CN115174495A - Resource allocation method based on parallel routing and related equipment - Google Patents

Resource allocation method based on parallel routing and related equipment

Info

Publication number
CN115174495A (application CN202210701835.4A; granted as CN115174495B)
Authority
CN (China)
Prior art keywords
flow, resource, occupied, return value
Legal status
Granted; Active
Application number
CN202210701835.4A
Other languages
Chinese (zh)
Other versions
CN115174495B (en)
Inventor
王旭东
Current Assignee / Original Assignee
Ping An Bank Co Ltd
Filing and publication
Application CN202210701835.4A was filed by Ping An Bank Co Ltd with a priority/filing date of 2022-06-20; CN115174495A was published on 2022-10-11, and the granted publication CN115174495B followed on 2023-06-16.

Classifications

    • H04L 47/78: Architectures of resource allocation (H: Electricity; H04: Electric communication technique; H04L: Transmission of digital information, e.g. telegraphic communication; H04L 47/00: Traffic control in data switching networks; H04L 47/70: Admission control; Resource allocation)
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds (H04L 47/00: Traffic control in data switching networks; H04L 47/10: Flow control; Congestion control)

Abstract

The embodiment of the application provides a resource allocation method based on parallel routing and related equipment. The method includes: executing a first flow; acquiring the amount of resources occupied by the first flow and the amount of resources occupied by a second flow; if the sum of the two amounts is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule; and if the sum is not greater than the predetermined resource threshold, executing the second flow. The first flow includes calling a first interface to obtain a first return value, and cloning the first return value to obtain a clone value. The second flow includes calling a second interface to obtain a second return value, comparing the clone value with the second return value, and returning the comparison result. The embodiment of the application can thereby avoid the system stalls, and the resulting loss of migration efficiency, caused when local traffic suddenly doubles.

Description

Resource allocation method based on parallel routing and related equipment
Technical Field
The present application relates to the field of computer and communication technologies, and in particular, to a resource allocation method based on parallel routing and a related device.
Background
Currently, in the Internet field, many systems need to be continuously updated and iterated, which raises the problem of migrating from old functions or interfaces to new ones. During migration there are many differences between the new interface and the old interface, so their data need to be compared and verified; however, local traffic can easily double and spike during migration, which tends to stall the system and reduce migration efficiency.
Disclosure of Invention
The embodiments of the application provide a resource allocation method based on parallel routing and related equipment, which can, at least to a certain extent, overcome the prior-art problem that local traffic easily doubles and spikes during migration, stalling the system and reducing migration efficiency.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a parallel routing-based resource allocation method, including:
Executing a first flow.
Acquiring the amount of resources occupied by the first flow and the amount of resources occupied by a second flow.
If the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule.
If the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than the predetermined resource threshold, executing the second flow.
The first flow specifically includes the following steps:
Calling a first interface to obtain a first return value.
Cloning the first return value to obtain a clone value.
The second flow specifically includes the following steps:
Calling a second interface to obtain a second return value.
Comparing the clone value with the second return value, and returning a comparison result.
In an embodiment of the present application, after discarding the second flow according to the predetermined rule when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the method further includes:
If the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow falls below the predetermined resource threshold again, executing the discarded second flow again.
In an embodiment of the present application, discarding the second flow according to the predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold specifically includes:
Discarding all second flows for which the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold.
In an embodiment of the present application, discarding the second flow according to the predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold specifically includes:
Inputting the first return value, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow into a screening model, the screening model outputting whether the second flow corresponding to the first return value is to be discarded.
In an embodiment of the present application, the training method of the screening model specifically includes:
Acquiring a set of flow data samples, each flow data sample being labelled in advance with whether it should be discarded.
Inputting the data of each flow data sample into the screening model, which outputs whether the sample is to be discarded.
If the output obtained after inputting a flow data sample into the screening model is inconsistent with the label assigned to that sample in advance, adjusting the screening coefficient until the two are consistent.
When, for every flow data sample, the output of the screening model is consistent with the label assigned to that sample in advance, the training is finished.
In an embodiment of the application, comparing the clone value with the second return value and returning a comparison result specifically includes:
If the clone value is inconsistent with the second return value, returning alarm information.
In an embodiment of the present application, after returning the alarm information when the clone value is inconsistent with the second return value, the method further includes:
Correcting the parameters of the second flow.
Calling the second interface again through the second flow to obtain a second return value.
According to an aspect of the embodiments of the present application, there is provided a parallel routing-based resource allocation apparatus, including:
A first flow execution module, configured to execute a first flow.
A total resource amount acquisition module, configured to acquire the amount of resources occupied by the first flow and the amount of resources occupied by a second flow.
A second flow discarding module, configured to discard the second flow according to a predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold.
A second flow execution module, configured to execute the second flow if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than the predetermined resource threshold.
The first flow execution module specifically includes:
A first calling submodule, configured to call a first interface to obtain a first return value.
A clone value acquisition submodule, configured to clone the first return value to obtain a clone value.
The second flow execution module specifically includes:
A second calling submodule, configured to call a second interface to obtain a second return value.
A clone value comparison submodule, configured to compare the clone value with the second return value and return a comparison result.
According to an aspect of the embodiments of the present application, there is provided a computer readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the parallel routing based resource allocation method described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the parallel routing based resource allocation method described in the above embodiments.
In the technical solution provided by some embodiments of the application, when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, the second flow is discarded according to a predetermined rule. This reduces the amount of resources occupied by the flows processed in the current period, so that in every time period the amount of resources occupied by the flows can be kept below the predetermined resource threshold, which resolves the prior-art problem that local traffic doubles during migration, stalling the system and reducing migration efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
Fig. 2 schematically shows a flow chart of a parallel routing based resource allocation method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a specific implementation of step S100 in the parallel routing-based resource allocation method according to the corresponding embodiment in fig. 2.
Fig. 4 is a flowchart illustrating a specific implementation of step S400 in the parallel routing-based resource allocation method according to the corresponding embodiment in fig. 2.
Fig. 5 schematically shows a block diagram of a parallel routing based resource allocation apparatus according to an embodiment of the present application.
Fig. 6 illustrates the structure of a computer system suitable for implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user may use a terminal device to interact with the server 105 over the network 104 to receive or send messages and the like. The server 105 may be a server that provides various services. For example, the user uploads traffic data to the server 105 using the terminal device 103 (or terminal device 101 or 102), and the server 105 may execute the first flow, acquire the amount of resources occupied by the first flow and the amount of resources occupied by the second flow, discard the second flow according to a predetermined rule if the sum of the two amounts is greater than a predetermined resource threshold, and execute the second flow if the sum is not greater than the predetermined resource threshold.
It should be noted that the parallel routing-based resource allocation method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the parallel routing-based resource allocation apparatus is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have functions similar to those of the server, so as to execute the parallel routing-based resource allocation solution provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 shows a flowchart of a parallel routing based resource allocation method according to an embodiment of the present application, which may be performed by a server, which may be the server shown in fig. 1. Referring to fig. 2, the parallel routing-based resource allocation method at least includes:
in step S100, a first process is executed.
Step S200, the resource amount occupied by the first process and the resource amount occupied by the second process are obtained.
Step S300, if the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is larger than a preset resource threshold, the second flow is discarded according to a preset rule.
Step S400, if the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is not more than a preset resource threshold, executing the second flow.
In the embodiment of the application, the first flow is executed first, and then the resource amount occupied by the first flow and the resource amount occupied by the second flow are acquired. And if the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is greater than a preset resource threshold, discarding the second flow according to a preset rule. And if the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is not greater than a preset resource threshold, executing the second flow.
The embodiment of the application is through the resource amount that first flow occupies with when the sum of the resource amount that the second flow occupies is greater than predetermined resource threshold, will the abandonment processing is done according to predetermined rule to the second flow, has reduced the resource amount that the flow that its current period was handled was shared for every time quantum can both be with the resource amount control that the flow was shared below predetermined resource threshold, and then the problem that local flow double-rising leads to the system card to be pause, influences the efficiency of migration appears easily among the migration process of prior art.
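For illustration only, this gating logic can be sketched as follows. The helper names gate_second_flow, run_second_flow and discard_second_flow, and the threshold value of 24 (echoing the thread example given later), are assumptions made for this sketch rather than part of the patented implementation.

```python
# Illustrative sketch of the gating in steps S200-S400; names and values are assumed.
PREDETERMINED_RESOURCE_THRESHOLD = 24  # assumed threshold, e.g. a thread budget

def gate_second_flow(first_flow_resources, second_flow_resources,
                     run_second_flow, discard_second_flow):
    """Run the second flow only if the combined usage stays within the threshold."""
    total = first_flow_resources + second_flow_resources            # step S200
    if total > PREDETERMINED_RESOURCE_THRESHOLD:                     # step S300
        return discard_second_flow()  # discard according to the predetermined rule
    return run_second_flow()                                         # step S400
```

With these assumptions, gate_second_flow(16, 12, run, discard) would discard the second flow (28 exceeds 24), while gate_second_flow(10, 8, run, discard) would execute it.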
The first flow in step S100 is a flow that calls an old interface to obtain a return value and clones it.
Specifically, in some embodiments, reference may be made to fig. 3 for a specific implementation of step S100. Fig. 3 is a detailed description of step S100 in the parallel routing based resource allocation method according to the corresponding embodiment shown in fig. 2, where step S100 may include the following steps:
step S110, a first interface is called to obtain a first return value.
And S120, cloning the first return value to obtain a clone value.
That is, in the embodiment of the present application, the specific execution flow of step S100 is to first call the first interface to obtain the first return value, and then clone the first return value to obtain the clone value.
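A minimal sketch of these two steps is shown below; the placeholder call_first_interface stands in for whatever client actually invokes the old interface and is an assumption for this example.

```python
import copy

def first_flow(call_first_interface):
    """Sketch of steps S110-S120: call the old (first) interface and clone its result."""
    first_return_value = call_first_interface()         # step S110
    clone_value = copy.deepcopy(first_return_value)     # step S120: independent snapshot
    return first_return_value, clone_value
```

A deep copy keeps the snapshot stable even if the original response object is modified after the first flow has returned it to the caller.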
The second flow in step S400 is a flow that calls a new interface to obtain a return value and compares it with the return value of the old interface.
Specifically, in some embodiments, a specific implementation of step S400 may be found in fig. 4. Fig. 4 is a detailed description of step S400 in the parallel routing based resource allocation method according to the corresponding embodiment in fig. 2, where step S400 may include the following steps:
step S410, calling a second interface to obtain a second return value.
And step S420, comparing the clone value with the second return value, and returning a comparison result.
That is, in the embodiment of the present application, the specific execution flow of step S400 is to first invoke the second interface, obtain the second return value, compare the clone value with the second return value, and return the comparison result.
The first return value obtained in step S110 is compared, in step S420, with the second return value obtained in step S410, in order to verify that the second interface behaves correctly. Cloning the first return value into the clone value allows the first flow and the second flow to execute asynchronously, which in turn makes it possible to discard the second flow according to the predetermined rule when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold.
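The sketch below illustrates one way such an asynchronous arrangement could look; the thread pool size and the placeholder clients call_old_interface, call_new_interface and resources_available are assumptions for the example, not taken from the application.

```python
import copy
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)  # assumed pool for second flows

def handle_request(call_old_interface, call_new_interface, resources_available):
    """Serve the caller from the old interface; compare against the new one asynchronously."""
    first_return_value = call_old_interface()          # step S110
    clone_value = copy.deepcopy(first_return_value)    # step S120

    def second_flow():
        if not resources_available():                  # step S300 gate
            return "discarded"
        second_return_value = call_new_interface()     # step S410
        # step S420: alarm when the new interface disagrees with the cloned value
        return "ok" if second_return_value == clone_value else "alarm"

    executor.submit(second_flow)    # the second flow does not block the first flow
    return first_return_value
```

Because the comparison runs on the executor, the caller receives the old interface's result without waiting for the second flow, and the second flow can be dropped independently when resources run short.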
In step S420, the returned comparison result may take various forms. For example, in one embodiment, if the clone value is inconsistent with the second return value, alarm information is returned; if the clone value is consistent with the second return value, normal information is returned. In another embodiment, if the clone value is inconsistent with the second return value, error information is returned and the second return value is corrected according to the error information; if the clone value is consistent with the second return value, normal information is returned.
In some embodiments of the present application, after step S400, the method further comprises:
and correcting the parameters of the second process.
And calling a second interface again through the second flow to obtain a second return value.
In this embodiment, in addition to returning the error information, the parameter of the second flow is also modified to ensure that the second interface can return a correct second return value when the second interface is called, so that the new and old interfaces can be better adapted.
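A short sketch of this correction-and-retry step, with the hypothetical helpers correct_parameters and call_second_interface standing in for the actual correction logic and the new-interface client:

```python
def correct_and_retry(correct_parameters, call_second_interface, params, clone_value):
    """Correct the second flow's parameters and call the second interface again."""
    corrected = correct_parameters(params)                   # fix the second flow's parameters
    second_return_value = call_second_interface(corrected)   # call the second interface again
    return "ok" if second_return_value == clone_value else "alarm"
```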
In step S200, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow may be acquired by the server itself or via the terminal device. The amount of resources occupied by the first flow may include the threads, memory, bandwidth and the like occupied by the first flow; similarly, the amount of resources occupied by the second flow may include the threads, memory, bandwidth and the like occupied by the second flow.
In the following embodiments of the present application, threads are taken as an example.
In step S300, if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the total resources occupied by the two flows are already close to the maximum that the system can provide, and at this point the traffic needs to be controlled to prevent a sudden traffic increase from stalling the system and reducing migration efficiency.
Taking the number of threads as the resource amount, when the sum of the number of threads occupied by the first flow and the number of threads occupied by the second flow is greater than a predetermined thread threshold, the total number of threads occupied by the first flow and the second flow is already close to the total number of threads in the system thread pool. If this is not managed, a large number of flows cannot be allocated thread resources, so the system stalls and some flows may even be interrupted, affecting the overall efficiency of the migration.
The predetermined resource threshold may be the maximum amount of resources that the system can provide, a value close to that maximum, or a threshold with a margin determined according to the application environment.
Taking the number of threads as the resource amount, the predetermined resource threshold may be the total number of threads in the system thread pool, for example 32 threads; a critical value close to that total, for example 30 threads; or a threshold with a margin determined according to the application environment, for example 24 threads.
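These three choices can be expressed as in the sketch below; the helper thread_threshold and the 0.75 margin factor are illustrative assumptions, and only the figures 32, 30 and 24 come from the example above.

```python
POOL_SIZE = 32  # total threads in the system thread pool (from the example above)

def thread_threshold(strategy="margin"):
    """Pick the predetermined thread threshold; the strategies mirror the three examples."""
    if strategy == "max":        # the pool size itself
        return POOL_SIZE         # 32
    if strategy == "critical":   # a value close to the pool size
        return POOL_SIZE - 2     # 30
    return int(POOL_SIZE * 0.75) # a threshold with a margin, e.g. 24
```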
The predetermined rule may take several forms, such as discarding all second flows, or continuing the second flow only for the more important data in the migration.
Specifically, in some embodiments, step S300 is implemented as follows. This embodiment is a detailed description of step S300 in the parallel routing based resource allocation method according to the corresponding embodiment shown in fig. 2, where step S300 may include the following step:
Inputting the first return value, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow into a screening model, the screening model outputting whether the second flow corresponding to the first return value is to be discarded.
In this embodiment, the predetermined rule for discarding the second flow is implemented by a screening model: the first return value, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow are input into the screening model; a sub-model in the screening model extracts key field information from the first return value, from which the importance of the data handled by the currently executed first flow is judged; and the total number of threads to be allocated to all currently executed flows is judged from the amount of resources occupied by the first flow and the amount of resources occupied by the second flow. On this basis the screening model determines whether the second flow is to be discarded.
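As a purely illustrative stand-in for such a screening model (the application does not specify the model in this form), the sketch below uses a simple linear scoring rule and assumes the first return value is a mapping of field names to values; the weights and cut-offs are arbitrary.

```python
def screen_second_flow(first_return_value, first_flow_threads, second_flow_threads,
                       field_weights, pool_size=32):
    """Return True when the second flow corresponding to this first return value
    should be discarded (hypothetical linear stand-in for the screening model)."""
    # Key-field importance: weight each field name present in the first return value.
    importance = sum(field_weights.get(field, 0.0) for field in first_return_value)
    # Load: share of the thread pool the two flows would occupy together.
    load = (first_flow_threads + second_flow_threads) / pool_size
    # Under heavy load, discard the flows whose data is judged least important.
    return load > 0.75 and importance < 1.0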
The training method of the screening model specifically includes the following steps (a minimal sketch of the resulting training loop is given after these steps):
Acquiring a set of flow data samples, each flow data sample being labelled in advance with whether it should be discarded.
Inputting the data of each flow data sample into the screening model, which outputs whether the sample is to be discarded.
If the output obtained after inputting a flow data sample into the screening model is inconsistent with the label assigned to that sample in advance, adjusting the screening coefficient until the two are consistent.
When, for every flow data sample, the output of the screening model is consistent with the label assigned to that sample in advance, the training is finished.
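The sketch referenced above, assuming a predict(flow_data, coefficient) screening function and labelled (flow_data, should_discard) samples; all names, the step size and the epoch limit are illustrative assumptions.

```python
def train_screening_model(samples, predict, coefficient=1.0, step=0.1, max_epochs=100):
    """Coefficient-adjustment loop sketched from the training steps above."""
    for _ in range(max_epochs):
        consistent = True
        for flow_data, should_discard in samples:
            if predict(flow_data, coefficient) != should_discard:
                consistent = False
                # Nudge the screening coefficient toward the labelled decision.
                coefficient += step if should_discard else -step
        if consistent:
            break  # every sample reproduces its label; training is finished
    return coefficient
```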
In some embodiments of the present application, after step S300, the method further comprises:
and if the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is smaller than the preset resource threshold value again, executing the abandoned second flow again.
In the migration process, the change of the flow rate is fluctuated, and when the sum of the resource amount occupied by the first flow and the resource amount occupied by the second flow is smaller than the preset resource threshold value again, the second flow can be restarted for comparison.
In the present embodiment, the discarding process at step S300 means to temporarily stop the process second flow and store the resultant data. And after the resource amount in the step is recovered to be normal, continuing the suspended second flow to realize the full flow comparison.
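A sketch of this suspend-and-resume behaviour using a simple queue; the names discard and resume_suspended, and the callable current_usage, are assumptions made for the example.

```python
from collections import deque

suspended_second_flows = deque()  # discarded second flows and their stored data

def discard(second_flow, stored_data):
    """'Discarding' here means suspending: keep the data so the flow can resume later."""
    suspended_second_flows.append((second_flow, stored_data))

def resume_suspended(current_usage, threshold):
    """Resume suspended second flows while resource usage stays under the threshold.

    'current_usage' is a callable returning the resources occupied right now.
    """
    results = []
    while suspended_second_flows and current_usage() < threshold:
        second_flow, stored_data = suspended_second_flows.popleft()
        results.append(second_flow(stored_data))  # the comparison still covers all traffic
    return results
```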
The following describes an apparatus embodiment of the present application, which may be used to execute the parallel routing-based resource allocation method in the foregoing embodiment of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the resource allocation method based on parallel routing described above in the present application.
Fig. 5 shows a block diagram of a parallel routing based resource allocation apparatus according to an embodiment of the present application.
Referring to fig. 5, a parallel routing-based resource allocation apparatus 500 according to an embodiment of the present application includes:
a first process executing module 510 for executing the first process.
The total resource amount obtaining module 520 is configured to obtain a resource amount occupied by the first flow and a resource amount occupied by the second flow.
A second flow discarding module 530, configured to discard the second flow according to a predetermined rule if a sum of the amount of the resource occupied by the first flow and the amount of the resource occupied by the second flow is greater than a predetermined resource threshold.
A second flow executing module 540, configured to execute the second flow if a sum of the amount of the resource occupied by the first flow and the amount of the resource occupied by the second flow is not greater than a predetermined resource threshold.
The first process execution module 510 specifically includes:
the first calling submodule 511 is configured to call the first interface to obtain the first return value.
And a clone value obtaining submodule 512 for cloning the first return value to obtain a clone value.
The second process executing module 540 specifically includes:
and a second calling submodule 541, configured to call a second interface, and obtain a second return value.
And the clone value comparison submodule 542 is configured to compare the clone value with the second return value, and return a comparison result.
FIG. 6 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system of the electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system includes a Central Processing Unit (CPU) 1801, which can perform various appropriate actions and processes, such as executing the method described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data necessary for system operation are also stored. The CPU 1801, ROM 1802, and RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to bus 1804.
The following components are connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output section 1807 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1808 including a hard disk and the like; and a communication section 1809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. A driver 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1810 as necessary so that a computer program read out therefrom is installed in the storage section 1808 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 1809, and/or installed from the removable medium 1811. The computer program, when executed by the Central Processing Unit (CPU) 1801, executes the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer-readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware, and the described units may also be disposed in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A resource allocation method based on parallel routing, characterized by comprising the following steps:
executing a first flow;
acquiring the amount of resources occupied by the first flow and the amount of resources occupied by a second flow;
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold, discarding the second flow according to a predetermined rule;
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than the predetermined resource threshold, executing the second flow;
wherein the first flow specifically comprises:
calling a first interface to obtain a first return value;
cloning the first return value to obtain a clone value;
and the second flow specifically comprises:
calling a second interface to obtain a second return value;
and comparing the clone value with the second return value, and returning a comparison result.
2. The parallel routing-based resource allocation method according to claim 1, wherein after discarding the second flow according to the predetermined rule when the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold, the method further comprises:
if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow falls below the predetermined resource threshold again, executing the discarded second flow again.
3. The parallel routing-based resource allocation method according to claim 1, wherein discarding the second flow according to the predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold specifically comprises:
discarding all second flows for which the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold.
4. The parallel routing-based resource allocation method according to claim 1, wherein discarding the second flow according to the predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than the predetermined resource threshold specifically comprises:
inputting the first return value, the amount of resources occupied by the first flow and the amount of resources occupied by the second flow into a screening model, the screening model outputting whether the second flow corresponding to the first return value is to be discarded.
5. The parallel routing-based resource allocation method according to claim 4, wherein the training method of the screening model specifically comprises:
acquiring a set of flow data samples, each flow data sample being labelled in advance with whether it should be discarded;
inputting the data of each flow data sample into the screening model, which outputs whether the sample is to be discarded;
if the output obtained after inputting a flow data sample into the screening model is inconsistent with the label assigned to that sample in advance, adjusting the screening coefficient until the two are consistent;
and when, for every flow data sample, the output of the screening model is consistent with the label assigned to that sample in advance, finishing the training.
6. The method for resource allocation based on parallel routing according to claim 1, wherein the comparing the clone value with the second return value and returning a comparison result specifically comprises:
and if the clone value is inconsistent with the second return value, returning alarm information.
7. The parallel routing-based resource allocation method according to claim 1, wherein after returning alarm information when the clone value is inconsistent with the second return value, the method further comprises:
correcting the parameters of the second flow;
and calling the second interface again through the second flow to obtain a second return value.
8. A parallel routing-based resource allocation apparatus, characterized in that the apparatus comprises:
a first flow execution module, configured to execute a first flow;
a total resource amount acquisition module, configured to acquire the amount of resources occupied by the first flow and the amount of resources occupied by a second flow;
a second flow discarding module, configured to discard the second flow according to a predetermined rule if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is greater than a predetermined resource threshold;
a second flow execution module, configured to execute the second flow if the sum of the amount of resources occupied by the first flow and the amount of resources occupied by the second flow is not greater than the predetermined resource threshold;
wherein the first flow execution module specifically comprises:
a first calling submodule, configured to call a first interface to obtain a first return value;
a clone value acquisition submodule, configured to clone the first return value to obtain a clone value;
and the second flow execution module specifically comprises:
a second calling submodule, configured to call a second interface to obtain a second return value;
and a clone value comparison submodule, configured to compare the clone value with the second return value and return a comparison result.
9. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for parallel routing based resource allocation according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the parallel routing based resource allocation method of any of claims 1 to 7.
CN202210701835.4A (priority date 2022-06-20; filing date 2022-06-20): Resource allocation method based on parallel routing and related equipment; status Active; granted as CN115174495B.

Priority Applications (1)

CN202210701835.4A (priority date 2022-06-20; filing date 2022-06-20): Resource allocation method based on parallel routing and related equipment, granted as CN115174495B.

Publications (2)

CN115174495A, published 2022-10-11
CN115174495B, published 2023-06-16 (granted publication)

Family

ID: 83487906

Family Applications (1)

CN202210701835.4A (priority date 2022-06-20; filing date 2022-06-20): Resource allocation method based on parallel routing and related equipment; status Active; granted as CN115174495B.

Country Status (1)

CN: CN115174495B

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018028746A (en) * 2016-08-16 2018-02-22 富士通株式会社 Virtual machine management program, virtual machine management method, and virtual machine management device
CN108694104A (en) * 2017-04-12 2018-10-23 北京京东尚科信息技术有限公司 A kind of interface function contrast test method, apparatus, electronic equipment and storage medium
CN109600384A (en) * 2018-12-28 2019-04-09 江苏满运软件科技有限公司 Flow switching method, system, equipment and storage medium in RPC interface upgrade
CN111552632A (en) * 2020-03-27 2020-08-18 北京奇艺世纪科技有限公司 Interface testing method and device
CN112363944A (en) * 2020-11-20 2021-02-12 上海悦易网络信息技术有限公司 Method and equipment for comparing return values of multiple environment interfaces
CN114124819A (en) * 2021-10-22 2022-03-01 北京乐我无限科技有限责任公司 Flow distribution control method and device, storage medium and computer equipment
CN114385353A (en) * 2021-12-23 2022-04-22 中国电信股份有限公司 Resource scheduling method and device, electronic equipment and storage medium

Also Published As

CN115174495B, published 2023-06-16


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant