CN115174495B - Resource allocation method based on parallel routing and related equipment

Resource allocation method based on parallel routing and related equipment

Info

Publication number: CN115174495B (application CN202210701835.4A)
Authority: CN (China)
Prior art keywords: flow, amount, resources occupied, value, return value
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN115174495A
Inventor: 王旭东
Current Assignee / Original Assignee: Ping An Bank Co Ltd
Application filed by Ping An Bank Co Ltd; priority to CN202210701835.4A (priority and filing date 2022-06-20)
Publication of CN115174495A: 2022-10-11; grant and publication of CN115174495B: 2023-06-16

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/78: Architectures of resource allocation
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the application provides a resource allocation method based on parallel routing and related equipment. The resource allocation method based on parallel routing comprises the following steps: executing a first flow; acquiring the amount of resources occupied by the first flow and the amount of resources occupied by a second flow; if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule; and if the sum is not greater than the predetermined resource threshold, executing the second flow. The first flow comprises: calling a first interface to obtain a first return value, and cloning the first return value to obtain a clone value. The second flow comprises: calling a second interface to obtain a second return value, comparing the clone value with the second return value, and returning the comparison result. The embodiment of the application can thus avoid the system blocking, and the resulting loss of migration efficiency, caused by a local doubling surge of traffic.

Description

Resource allocation method based on parallel routing and related equipment
Technical Field
The present invention relates to the field of computer and communication technologies, and in particular, to a resource allocation method and related devices based on parallel routing.
Background
Currently, in the Internet field, many systems require continuous updating and iteration, which raises the problem of migrating old functions or old interfaces to new ones. During migration there are many differences between the new and old interfaces, so the data must be compared and verified. However, local traffic tends to double and surge during migration, which easily blocks the system and reduces migration efficiency.
Disclosure of Invention
The embodiment of the application provides a resource allocation method based on parallel routing and related equipment, which at least to some extent alleviate the problem in the prior art that local traffic easily doubles and surges during migration, blocking the system and affecting migration efficiency.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the embodiments of the present application, there is provided a resource allocation method based on parallel routing, including:
A first flow is executed.
The amount of resources occupied by the first flow and the amount of resources occupied by a second flow are acquired.
If the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, the second flow is discarded according to a predetermined rule.
If the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than the predetermined resource threshold, the second flow is executed.
The first flow specifically comprises:
calling a first interface to obtain a first return value; and
cloning the first return value to obtain a clone value.
The second flow specifically comprises:
calling a second interface to obtain a second return value; and
comparing the clone value with the second return value, and returning the comparison result.
In one embodiment of the present application, after the second flow is discarded according to the predetermined rule because the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the method further includes:
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow falls back below the predetermined resource threshold, re-executing the discarded second flow.
In one embodiment of the present application, if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule, including:
and discarding all the second processes corresponding to the situation that the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is larger than a preset resource threshold value.
In one embodiment of the present application, if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule, including:
and inputting the first return value, the resource amount occupied by the first flow and the resource amount occupied by the second flow into a screening model, and outputting a result of whether the second flow corresponding to the first return value is subjected to discarding treatment or not by the screening model.
In one embodiment of the present application, the training method of the screening model specifically includes:
and acquiring a flow data sample set, wherein each flow data sample is calibrated in advance to a corresponding result of whether discarding is performed or not.
And respectively inputting the data of each flow data sample into a screening model to obtain a result of whether the screening output is subjected to discarding treatment or not.
And if the result of whether the discarding process is inconsistent with the result of whether the discarding process is performed on the process data sample, which is calibrated in advance, after the data of the process data sample is input into the screening model, adjusting the screening coefficient until the result is consistent with the result of whether the discarding process is performed on the process data sample.
After the data of all the flow data samples are input into the screening model, the obtained result of whether the discarding process is performed is consistent with the result of whether the discarding process is performed or not, which is calibrated in advance for the flow data samples, and the training is finished.
In one embodiment of the present application, comparing the clone value with the second return value and returning the comparison result specifically includes:
returning alarm information if the clone value is inconsistent with the second return value.
In one embodiment of the present application, after the alarm information is returned because the clone value is inconsistent with the second return value, the method further includes:
correcting the parameters of the second flow; and
calling the second interface again through the second flow to obtain a new second return value.
According to an aspect of the embodiments of the present application, there is provided a resource allocation device based on parallel routing, including:
the first procedure execution module is used for executing a first procedure.
The resource total amount acquisition module is used for acquiring the resource amount occupied by the first flow and the resource amount occupied by the second flow.
And the second process discarding module is used for discarding the second process according to a preset rule if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is greater than a preset resource threshold.
And the second procedure execution module is used for executing the second procedure if the sum of the amount of resources occupied by the first procedure and the amount of resources occupied by the second procedure is not greater than a preset resource threshold value.
The specific steps of the first procedure execution module include:
and the first calling sub-module is used for calling the first interface to acquire a first return value.
And the clone value acquisition submodule is used for cloning the first return value to obtain a clone value.
The second process execution module specifically comprises the following steps:
and the second calling sub-module is used for calling a second interface and acquiring a second return value.
And the clone value comparison submodule is used for comparing the clone value with the second return value and returning a comparison result.
According to an aspect of the embodiments of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a parallel routing based resource allocation method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors. And a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the parallel routing based resource allocation method as described in the above embodiments.
In the technical solutions provided in some embodiments of the present application, when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, the second flow is discarded according to a predetermined rule. This reduces the amount of resources occupied by the traffic processed in the current period, so that the amount of resources occupied by the traffic can be kept below the predetermined resource threshold in every period. It thereby addresses the prior-art problem that local traffic doubles and swells during migration, blocking the system and affecting migration efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present application may be applied.
Fig. 2 schematically illustrates a flow chart of a parallel routing based resource allocation method according to one embodiment of the present application.
Fig. 3 is a flowchart of a specific implementation of step S100 in the parallel routing-based resource allocation method according to the corresponding embodiment of fig. 2.
Fig. 4 is a flowchart of a specific implementation of step S400 in the parallel routing-based resource allocation method according to the corresponding embodiment of fig. 2.
Fig. 5 schematically shows a block diagram of a parallel routing based resource allocation apparatus according to an embodiment of the present application.
Fig. 6 shows a structure of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present application may be applied.
As shown in fig. 1, the system architecture may include a terminal device (such as one or more of the smartphone 101, tablet 102, and portable computer 103 shown in fig. 1, but of course, a desktop computer, etc.), a network 104, and a server 105. The network 104 is the medium used to provide communication links between the terminal devices and the server 105. The network 104 may include various connection types, such as wired communication links, wireless communication links, and the like.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 105 may be a server cluster formed by a plurality of servers.
A user may interact with the server 105 via the network 104 using a terminal device to receive or send messages and the like. The server 105 may be a server providing various services. For example, the user may upload traffic data to the server 105 using the terminal device 103 (or terminal device 101 or 102), and the server 105 may then execute the first flow, acquire the amount of resources occupied by the first flow and the amount of resources occupied by the second flow, discard the second flow according to a predetermined rule if the sum of the two amounts is greater than a predetermined resource threshold, and execute the second flow if the sum is not greater than the predetermined resource threshold.
It should be noted that, the resource allocation method based on parallel routing provided in the embodiments of the present application is generally executed by the server 105, and accordingly, the resource allocation device based on parallel routing is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the parallel routing-based resource allocation scheme provided in the embodiments of the present application.
The implementation details of the technical solutions of the embodiments of the present application are described in detail below:
fig. 2 illustrates a flow chart of a parallel routing-based resource allocation method, which may be performed by a server, which may be the server illustrated in fig. 1, according to one embodiment of the present application. Referring to fig. 2, the resource allocation method based on parallel routing at least includes:
step S100, executing a first procedure.
Step S200, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow are obtained.
And step S300, if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is greater than a preset resource threshold, discarding the second process according to a preset rule.
Step S400, if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than a predetermined resource threshold, executing the second flow.
In the embodiment of the application, the first procedure is executed first, and then the amount of resources occupied by the first procedure and the amount of resources occupied by the second procedure are acquired. And if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is greater than a preset resource threshold, discarding the second process according to a preset rule. And if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is not greater than a preset resource threshold value, executing the second process.
By discarding the second flow according to the predetermined rule when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the embodiment of the application reduces the amount of resources occupied by the traffic processed in the current period and keeps the amount of resources occupied by the traffic below the predetermined resource threshold in every period. This addresses the prior-art problem that local traffic easily doubles and swells during migration, blocking the system and affecting migration efficiency.
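For orientation, the overall decision of steps S100 to S400 can be sketched as follows. This is a minimal Python illustration only; the flow objects, their method names and the numeric threshold are assumptions introduced here for readability and are not defined by the patent.

    # Minimal sketch of steps S100-S400; names and the threshold value are illustrative.
    PREDETERMINED_THRESHOLD = 24  # e.g. a thread budget below the pool size

    def allocate(first_flow, second_flow):
        clone_value = first_flow.run()                             # S100: execute the first flow
        total = first_flow.resources() + second_flow.resources()   # S200: sum the occupied resources
        if total > PREDETERMINED_THRESHOLD:                        # S300: over budget, discard per rule
            second_flow.discard()
            return None
        return second_flow.run(clone_value)                        # S400: execute and compare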
The first flow in step S100 is the flow that calls the old interface, obtains its return value and clones that value.
Specifically, in some embodiments, the specific implementation of step S100 may refer to fig. 3. Fig. 3 is a detailed description of step S100 in the parallel routing-based resource allocation method according to the corresponding embodiment of fig. 2, where step S100 may include the following steps:
step S110, call the first interface to obtain the first return value.
And step S120, cloning the first return value to obtain a cloned value.
That is, in the embodiment of the present application, the specific execution flow of step S100 is to call the first interface to obtain the first return value, and clone the first return value to obtain the clone value.
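A minimal sketch of steps S110 and S120 follows, assuming for illustration that the first (old) interface is an HTTP endpoint returning JSON; the patent does not fix the transport, and the URL is a placeholder.

    import copy
    import json
    from urllib.request import urlopen

    def first_flow(old_interface_url):
        """S110: call the first interface; S120: clone its return value."""
        with urlopen(old_interface_url, timeout=5) as resp:   # assumed HTTP/JSON interface
            first_return_value = json.load(resp)
        clone_value = copy.deepcopy(first_return_value)       # deep copy so later steps cannot mutate it
        return first_return_value, clone_value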
The second flow in step S400 is the flow that calls the new interface, obtains its return value, and compares that return value with the return value of the old interface.
Specifically, in some embodiments, the specific implementation of step S400 may refer to fig. 4. Fig. 4 is a detailed description of step S400 in the parallel routing-based resource allocation method according to the corresponding embodiment of fig. 2, where step S400 may include the following steps:
step S410, call the second interface, obtain the second return value.
Step S420, comparing the clone value with the second return value, and returning a comparison result.
That is, in the embodiment of the present application, the specific execution flow of step S400 is to call the second interface first, obtain the second return value, then compare the clone value with the second return value, and return the comparison result.
The first return value obtained in step S110 is used, through its clone, for comparison against the second return value obtained in step S410; the comparison in step S420 verifies that the second interface is not problematic. Because the first return value is cloned into a clone value, the first flow and the second flow can be executed asynchronously, so that when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the second flow can be discarded according to the predetermined rule.
The comparison result returned in step S420 can take various forms. For example, in one embodiment, alarm information is returned if the clone value is inconsistent with the second return value, and normal information is returned if they are consistent. In another embodiment, error information is returned if the clone value is inconsistent with the second return value and a correction is made based on that error information, while normal information is returned if they are consistent.
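Under the same HTTP/JSON assumption as the earlier sketch, steps S410 and S420 together with the alarm/normal result could look like this; the dictionary keys are illustrative, not mandated by the text.

    import json
    from urllib.request import urlopen

    def second_flow(new_interface_url, clone_value):
        """S410: call the second interface; S420: compare its return value with the clone value."""
        with urlopen(new_interface_url, timeout=5) as resp:    # assumed HTTP/JSON interface
            second_return_value = json.load(resp)
        if clone_value != second_return_value:
            return {"result": "alarm", "expected": clone_value, "actual": second_return_value}
        return {"result": "normal"}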
In some embodiments of the present application, after step S400, the method further comprises:
and correcting the parameters of the second flow.
And calling a second interface through a second flow again to acquire a second return value.
In this embodiment, in addition to returning the error information, the parameters of the second flow are further modified to ensure that when the second interface is called, the second interface can return a correct second return value, so that the new and old interfaces can be better adapted.
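One way this correct-and-retry step could be organised is sketched below with injected callables, so that no concrete interface or correction logic is assumed; both hooks are hypothetical.

    def correct_and_retry(call_second_interface, params, clone_value, correct_params):
        """Call the second interface; on a mismatch with the clone value, correct the
        second flow's parameters and call the second interface once more."""
        second_return_value = call_second_interface(params)
        if clone_value != second_return_value:
            params = correct_params(params, second_return_value)   # hypothetical correction hook
            second_return_value = call_second_interface(params)
        return second_return_value, clone_value == second_return_value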
In step S200, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow may be acquired by the server itself or reported by the terminal device. The amount of resources occupied by the first flow may include the threads, memory and bandwidth occupied by the first flow, and likewise the amount of resources occupied by the second flow may include the threads, memory and bandwidth occupied by the second flow.
In the subsequent embodiments of the present application, threads are used as the example.
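How the server samples its own occupancy is left open by the text; one illustrative possibility in Python uses the standard library for the thread count and the POSIX-only resource module for memory (whose units are platform dependent), so this is a sketch rather than a prescribed measurement.

    import threading
    import resource  # POSIX-only; ru_maxrss units differ between Linux and macOS

    def occupied_resource_amount():
        """Sample the kinds of per-flow resource amounts discussed above (threads, memory)."""
        return {
            "threads": threading.active_count(),
            "peak_memory": resource.getrusage(resource.RUSAGE_SELF).ru_maxrss,
        }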
In step S300, if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the total is already close to the maximum amount of resources the system can provide. At this point the traffic needs to be controlled to avoid the system blocking, and the migration efficiency suffering, because of a traffic swell.
Taking the number of threads as the resource amount, when the sum of the number of threads occupied by the first flow and the number of threads occupied by the second flow is greater than a predetermined thread threshold, the total number of threads occupied by the two flows is close to the total number of threads in the system thread pool. Without control, a large number of flows would not be allocated thread resources, the system would block, some flows might even be interrupted, and the overall migration efficiency would suffer.
The predetermined resource threshold may be a maximum amount of resources that can be provided by the system, a threshold close to the maximum amount of resources that can be provided by the system, or a threshold with a margin determined according to an application environment.
Taking the resource amount as the thread number as an example, the predetermined resource threshold may be the total number of threads in the system thread pool, for example, 32 threads, or a threshold close to the total number of threads in the system thread pool, for example, 30 threads, or a threshold with a margin determined according to the application environment, for example, 24 threads.
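With the example numbers above, the threshold check itself reduces to a comparison; the constant below reuses those illustrative values (a 32-thread pool and a 24-thread margin) and is not a requirement of the method.

    THREAD_THRESHOLD = 24   # predetermined resource threshold with margin (example value; pool size 32)

    def over_threshold(first_flow_threads, second_flow_threads):
        """True when the two flows together would exceed the predetermined thread threshold,
        i.e. the case in which step S300 discards the second flow."""
        return first_flow_threads + second_flow_threads > THREAD_THRESHOLD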
The predetermined rules may include various rules such as discarding all of the second flow, or selecting data that is important in the migration process to continue execution of the second flow.
Specifically, in some embodiments, a specific implementation of step S300 is as follows. The present embodiment is a detailed description of step S300 in the parallel routing-based resource allocation method according to the corresponding embodiment shown in fig. 2, where step S300 may include the following steps:
and inputting the first return value, the resource amount occupied by the first flow and the resource amount occupied by the second flow into a screening model, and outputting a result of whether the second flow corresponding to the first return value is subjected to discarding treatment or not by the screening model.
In this embodiment, the predetermined rule by which the second flow is discarded is determined by a screening model. The first return value, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow are input into the screening model. A sub-model within the screening model extracts key field information from the first return value, from which the importance of the data handled by the currently executing first flow can be judged, while the two resource amounts indicate the total number of threads that all currently executing flows would need. On this basis the model decides whether the second flow is to be discarded.
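The internals of the screening model are not specified; as a stand-in, the sketch below scores an assumed key field of the first return value against the total resource demand. The field name, weights and threshold are illustrative assumptions, not the trained model itself.

    def screening_model(first_return_value, first_flow_resources, second_flow_resources,
                        resource_threshold=24, importance_weight=0.6, load_weight=0.4):
        """Return True if the second flow corresponding to this first return value should be
        discarded (an illustrative scoring rule standing in for the trained model)."""
        key_field = first_return_value.get("priority", "normal")   # assumed key field name
        importance = 1.0 if key_field == "high" else 0.0
        load = (first_flow_resources + second_flow_resources) / resource_threshold
        keep_score = importance_weight * importance - load_weight * load
        return keep_score < 0   # negative score: discard the second flow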
The training method of the screening model specifically includes the following steps.
A set of flow data samples is acquired, where each flow data sample is labeled in advance with the corresponding result of whether it should be discarded.
The data of each flow data sample is input into the screening model to obtain the discard-or-not result output by the model.
If, after the data of a flow data sample is input into the screening model, the output result is inconsistent with the result pre-labeled for that sample, the screening coefficients are adjusted until the two results are consistent.
Training ends when, after the data of all flow data samples have been input into the screening model, the output results are consistent with the results pre-labeled for the samples.
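The text prescribes comparing the model's output with the pre-calibrated labels and adjusting the screening coefficients until they agree, but not a particular update rule; the sketch below uses a perceptron-style adjustment purely as one concrete possibility, with illustrative names throughout.

    def train_screening_model(samples, coefficients, learning_rate=0.1, max_epochs=100):
        """samples: list of (feature_vector, discard_label) pairs, labels calibrated in advance.
        Adjust the coefficients until every predicted discard decision matches its label,
        or until the epoch budget runs out."""
        def predict(features, coeffs):
            return sum(c * f for c, f in zip(coeffs, features)) > 0   # True means discard

        for _ in range(max_epochs):
            consistent = True
            for features, label in samples:
                if predict(features, coefficients) != label:
                    consistent = False
                    direction = 1.0 if label else -1.0                # push the score toward the label
                    coefficients = [c + learning_rate * direction * f
                                    for c, f in zip(coefficients, features)]
            if consistent:
                break
        return coefficients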
In some embodiments of the present application, after step S300, the method further comprises:
and if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is again smaller than a preset resource threshold value, re-executing the discarded second flow.
In the migration process, the flow rate changes in a fluctuating manner, and when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is again smaller than a predetermined resource threshold value, the second flow can be restarted for comparison.
In the present embodiment, the discarding process at step S300 means to temporarily stop the second flow of the process and store the obtained data. And after the resource quantity in the step is recovered to be normal, continuing the paused second flow so as to realize full flow comparison.
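A sketch of this pause-and-resume behaviour is given below, assuming a simple in-memory queue for the paused second flows and a callable that reports current resource usage; both are illustrative choices rather than part of the described method.

    from collections import deque

    PREDETERMINED_THRESHOLD = 24
    paused_second_flows = deque()   # stored data of second flows paused by step S300

    def discard_second_flow(flow_data):
        """Step S300's discard: pause the second flow and keep its data for later comparison."""
        paused_second_flows.append(flow_data)

    def resume_paused_flows(current_resource_usage, run_second_flow):
        """Once usage falls back below the threshold, re-execute the paused second flows so
        the comparison eventually covers the full traffic."""
        while paused_second_flows and current_resource_usage() < PREDETERMINED_THRESHOLD:
            run_second_flow(paused_second_flows.popleft())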
The following describes an embodiment of an apparatus of the present application, which may be used to perform the parallel routing-based resource allocation method in the above embodiment of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the resource allocation method based on parallel routing described in the present application.
Fig. 5 shows a block diagram of a parallel routing based resource allocation apparatus according to one embodiment of the present application.
Referring to fig. 5, a resource allocation apparatus 500 based on parallel routing according to an embodiment of the present application includes:
the first process execution module 510 is configured to execute a first process.
The total resource obtaining module 520 is configured to obtain the amount of resources occupied by the first flow and the amount of resources occupied by the second flow.
And a second flow Cheng Paoqi module 530, configured to discard the second flow according to a predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold.
And a second process execution module 540, configured to execute a second process if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is not greater than a predetermined resource threshold.
The specific steps of the first process execution module 510 include:
the first calling sub-module 511 is configured to call the first interface to obtain the first return value.
A clone value obtaining submodule 512, configured to clone the first return value to obtain a clone value.
The second process execution module 540 specifically includes the following steps:
and a second calling sub-module 541, configured to call the second interface, and obtain a second return value.
And a clone value comparison sub-module 542, configured to compare the clone value with the second return value, and return a comparison result.
Fig. 6 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system of the electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 6, the computer system includes a central processing unit (Central Processing Unit, CPU) 1801, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage section 1808 into a random access Memory (Random Access Memory, RAM) 1803. In the RAM 1803, various programs and data required for system operation are also stored. The CPU 1801, ROM 1802, and RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to the bus 1804.
The following components are connected to the I/O interface 1805: an input section 1806 including a keyboard, a mouse, and the like; an output portion 1807 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker, etc.; a storage section 1808 including a hard disk or the like; and a communication section 1809 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. The drive 1810 is also connected to the I/O interface 1805 as needed. Removable media 1811, such as magnetic disks, optical disks, magneto-optical disks, semiconductor memory, and the like, is installed as needed on drive 1810 so that a computer program read therefrom is installed as needed into storage portion 1808.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable medium 1811. The computer programs, when executed by a Central Processing Unit (CPU) 1801, perform the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a usb disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A resource allocation method based on parallel routing, comprising:
executing a first procedure;
acquiring the resource quantity occupied by the first flow and the resource quantity occupied by the second flow;
if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is greater than a preset resource threshold, discarding the second process according to a preset rule;
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than a predetermined resource threshold, executing the second flow;
the specific steps of the first flow include:
calling a first interface to acquire a first return value;
cloning the first return value to obtain a cloned value;
the specific steps of the second flow include:
calling a second interface to acquire a second return value;
comparing the clone value with the second return value, and returning a comparison result;
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule, including:
and inputting the first return value, the resource amount occupied by the first flow and the resource amount occupied by the second flow into a screening model, and outputting a result of whether the second flow corresponding to the first return value is subjected to discarding treatment or not by the screening model.
2. The parallel routing-based resource allocation method according to claim 1, wherein after discarding the second flow according to a predetermined rule if a sum of an amount of resources occupied by the first flow and an amount of resources occupied by the second flow is greater than a predetermined resource threshold, the method further comprises:
and if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is again smaller than a preset resource threshold value, re-executing the discarded second flow.
3. The method for allocating resources based on parallel routing as defined in claim 1, wherein if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule, specifically comprising:
and discarding all the second processes corresponding to the situation that the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is larger than a preset resource threshold value.
4. The resource allocation method based on parallel routing according to claim 1, wherein the training method of the screening model specifically comprises:
acquiring a set of flow data samples, wherein each flow data sample is labeled in advance with the corresponding result of whether it should be discarded;
inputting the data of each flow data sample into the screening model to obtain the discard-or-not result output by the screening model;
if, after the data of a flow data sample is input into the screening model, the output result of whether to discard is inconsistent with the result pre-labeled for the flow data sample, adjusting the screening coefficients until the two results are consistent; and
ending training when, after the data of all the flow data samples have been input into the screening model, the obtained results of whether to discard are consistent with the results pre-labeled for the flow data samples.
5. The parallel routing-based resource allocation method according to claim 1, wherein the comparing the clone value with the second return value and returning a comparison result specifically includes:
and if the clone value is inconsistent with the second return value, returning alarm information.
6. The parallel routing-based resource allocation method according to claim 5, wherein after the alarm information is returned because the clone value is inconsistent with the second return value, the method further comprises:
correcting parameters of the second flow;
and calling a second interface through a second flow again to acquire a second return value.
7. A parallel routing-based resource allocation apparatus, characterized in that the parallel routing-based resource allocation apparatus comprises:
the first procedure execution module is used for executing a first procedure;
the resource total amount acquisition module is used for acquiring the resource amount occupied by the first flow and the resource amount occupied by the second flow;
the second process discarding module is configured to discard the second process according to a predetermined rule if the sum of the amount of resources occupied by the first process and the amount of resources occupied by the second process is greater than a predetermined resource threshold;
a second procedure execution module, configured to execute a second procedure if a sum of an amount of resources occupied by the first procedure and an amount of resources occupied by the second procedure is not greater than a predetermined resource threshold;
the specific steps of the first procedure execution module include:
the first calling sub-module is used for calling the first interface to acquire a first return value;
a clone value obtaining submodule for cloning the first return value to obtain a clone value;
the second process execution module specifically comprises the following steps:
the second calling sub-module is used for calling a second interface and acquiring a second return value;
the cloning value comparison submodule is used for comparing the cloning value with the second return value and returning a comparison result;
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule, including:
and inputting the first return value, the resource amount occupied by the first flow and the resource amount occupied by the second flow into a screening model, and outputting a result of whether the second flow corresponding to the first return value is subjected to discarding treatment or not by the screening model.
8. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the parallel routing based resource allocation method according to any of claims 1 to 6.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the parallel routing based resource allocation method of any of claims 1 to 6.
CN202210701835.4A 2022-06-20 2022-06-20 Resource allocation method based on parallel routing and related equipment Active CN115174495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210701835.4A CN115174495B (en) 2022-06-20 2022-06-20 Resource allocation method based on parallel routing and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210701835.4A CN115174495B (en) 2022-06-20 2022-06-20 Resource allocation method based on parallel routing and related equipment

Publications (2)

Publication Number Publication Date
CN115174495A CN115174495A (en) 2022-10-11
CN115174495B true CN115174495B (en) 2023-06-16

Family

ID=83487906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210701835.4A Active CN115174495B (en) 2022-06-20 2022-06-20 Resource allocation method based on parallel routing and related equipment

Country Status (1)

Country Link
CN (1) CN115174495B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018028746A (en) * 2016-08-16 2018-02-22 富士通株式会社 Virtual machine management program, virtual machine management method, and virtual machine management device
CN108694104A (en) * 2017-04-12 2018-10-23 北京京东尚科信息技术有限公司 A kind of interface function contrast test method, apparatus, electronic equipment and storage medium
CN114385353A (en) * 2021-12-23 2022-04-22 中国电信股份有限公司 Resource scheduling method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109600384B (en) * 2018-12-28 2021-08-03 江苏满运软件科技有限公司 Flow switching method, system, equipment and storage medium in RPC interface upgrading
CN111552632B (en) * 2020-03-27 2024-03-19 北京奇艺世纪科技有限公司 Interface testing method and device
CN112363944A (en) * 2020-11-20 2021-02-12 上海悦易网络信息技术有限公司 Method and equipment for comparing return values of multiple environment interfaces
CN114124819B (en) * 2021-10-22 2024-02-09 北京乐我无限科技有限责任公司 Flow distribution control method and device, storage medium and computer equipment


Also Published As

Publication number Publication date
CN115174495A (en) 2022-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant