CN112968966A - Scheduling method, scheduling device, electronic equipment and storage medium - Google Patents

Scheduling method, scheduling device, electronic equipment and storage medium

Info

Publication number
CN112968966A
CN112968966A (application number CN202110219367.2A)
Authority
CN
China
Prior art keywords
target
scheduling
vms
weight
analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110219367.2A
Other languages
Chinese (zh)
Other versions
CN112968966B (en)
Inventor
刘成乾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110219367.2A
Publication of CN112968966A
Application granted
Publication of CN112968966B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/45 Network directories; Name-to-address mapping
    • H04L 61/4505 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L 61/4511 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5007 Internet protocol [IP] addresses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22 Parsing or analysis of headers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a scheduling method, a scheduling apparatus, an electronic device and a storage medium, and relates to the field of cloud computing, in particular to cloud network technology and private cloud. The specific implementation scheme is as follows: acquiring a weight value, wherein the weight value is used for representing address scheduling priority information; receiving request messages sent by a plurality of virtual machines (VMs) deployed in a virtual private cloud (VPC); scheduling message resolution for the multiple VMs according to the proportion of each weight value in the total weight, and selecting a target VM from the multiple VMs; and resolving the request message sent by the target VM and feeding back the resolution record to the target VM in a targeted manner. With the present disclosure, intelligent scheduling of request-message resolution and targeted feedback of resolution records can be realized.

Description

Scheduling method, scheduling device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to the technical fields of cloud network technologies and private clouds.
Background
In network communication, an IP address is a string of numbers that is hard for users to remember, so a domain name can be used instead to log in to a website and carry out network communication. A domain name identifies a computer or a group of computers on the Internet and consists of a series of labels separated by dots. Through the Domain Name System (DNS), a domain name can be converted into the IP address of a server, so that services on the Internet can be accessed conveniently and addressing and network communication can be carried out through the IP address.
The DNS may also resolve a request message sent by a virtual machine (VM) deployed in a virtual private cloud (VPC); however, intelligent scheduling of request-message resolution cannot currently be achieved.
Disclosure of Invention
The disclosure provides a scheduling method, a scheduling device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a scheduling method, including:
acquiring a weight value, wherein the weight value is used for representing address scheduling priority information;
receiving request messages sent by a plurality of Virtual Machines (VM) deployed in a Virtual Private Cloud (VPC);
scheduling message resolution for the multiple VMs according to the proportion of the weight values in the total weight, and selecting a target VM from the multiple VMs;
and resolving the request message sent by the target VM, and feeding back the resolution record to the target VM in a targeted manner.
According to another aspect of the present disclosure, there is provided a scheduling apparatus, including:
an acquisition module, configured to acquire a weight value, wherein the weight value is used for representing address scheduling priority information;
a receiving module, configured to receive request messages sent by a plurality of virtual machines (VMs) deployed in a virtual private cloud (VPC);
a scheduling module, configured to schedule message resolution for the VMs according to the proportion of the weight value in the total weight, and to select a target VM from the VMs;
and a feedback module, configured to resolve the request message sent by the target VM and feed back the resolution record to the target VM in a targeted manner.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method provided by any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method provided by any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the method provided by any one of the embodiments of the present disclosure.
With the present disclosure, a weight value can be acquired, the weight value being used for representing address scheduling priority information; request messages sent by a plurality of VMs deployed in a VPC are received; message resolution is scheduled for the multiple VMs according to the proportion of each weight value in the total weight, and a target VM is selected from the multiple VMs; and the request message sent by the target VM is resolved and the resolution record is fed back to the target VM in a targeted manner, so that intelligent scheduling of request-message resolution and targeted feedback can be realized.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow diagram of a scheduling method according to an embodiment of the present disclosure;
FIG. 2 is a system framework diagram of a scheduling method applying an embodiment of the present disclosure;
FIG. 3 is a system framework diagram of a scheduling method applying an embodiment of the present disclosure;
FIG. 4 is a flowchart of an application example to which the scheduling method of the disclosed embodiments is applied;
fig. 5 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a scheduling method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The term "at least one" herein means any combination of at least two of any one or more of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C. The terms "first" and "second" used herein refer to and distinguish one from another in the similar art, without necessarily implying a sequence or order, or implying only two, such as first and second, to indicate that there are two types/two, first and second, and first and second may also be one or more.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
According to an embodiment of the present disclosure, a scheduling method is provided. Fig. 1 is a flowchart of the scheduling method according to an embodiment of the present disclosure. The method may be applied to a scheduling apparatus; for example, the apparatus may be deployed on a server or other processing device for execution, where it may perform configuration of domain name resolution rules, scheduling of request-message resolution, and the like. In some possible implementations, the method may also be implemented by a processor calling computer-readable instructions stored in a memory. As shown in fig. 1, the method includes:
s101, obtaining a weight value, wherein the weight value is used for representing address scheduling priority information.
S102, receiving request messages sent by a plurality of VMs deployed in the VPC.
S103, scheduling message resolution for the VMs according to the proportion of the weight values in the total weight, and selecting a target VM from the VMs.
S104, resolving the request message sent by the target VM, and feeding back the resolution record to the target VM in a targeted manner.
In one example of S101-S104, the processing logic is implemented in a DNS server, namely: a weight value is configured in the DNS server as part of the configuration information that allows domain name resolution of a message, and address scheduling priority information can be represented by the weight value; after receiving DNS request messages sent by a plurality of VMs deployed in a VPC, the DNS server schedules message resolution for the VMs according to the proportion of each weight value in the total weight, selects a target VM from the VMs, resolves the DNS request message sent by the target VM, and feeds back the resolution record to the target VM in a targeted manner.
With the present disclosure, a weight value can be acquired, the weight value being used for representing address scheduling priority information; request messages sent by a plurality of VMs deployed in a VPC are received; message resolution is scheduled for the multiple VMs according to the proportion of each weight value in the total weight, and a target VM is selected from the multiple VMs; and the request message sent by the target VM is resolved and the resolution record is fed back to the target VM in a targeted manner. Because the DNS server is configured with weight values (that is, with address scheduling priority information represented by the weight values), resolution scheduling is performed based on the source IP address in the DNS request and the ratio of the weight value preconfigured for each address to the total weight (for example, the ratio is evaluated with a random value), and the corresponding resolution record is fed back to the corresponding VM in a targeted manner, thereby realizing intelligent scheduling of request-message resolution and targeted feedback.
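The weight-proportional selection just described can be illustrated with a minimal sketch, assuming the resolution records and their weights are already held in memory; the function name and the use of Python's random.choices are illustrative assumptions rather than part of the disclosure.

```python
import random

# Illustrative resolution records for one domain name: (resolved IP, weight).
# The weight represents the address scheduling priority described above.
RECORDS = [
    ("192.168.1.12", 5),
    ("192.168.1.13", 3),
    ("192.168.1.14", 2),
]

def pick_record(records):
    """Return one resolved IP, chosen with probability equal to the
    proportion of its weight in the total weight (5/10, 3/10, 2/10 here)."""
    ips = [ip for ip, _ in records]
    weights = [weight for _, weight in records]
    return random.choices(ips, weights=weights, k=1)[0]

# Each query receives a single record instead of all three, so roughly
# half of the queries are steered to the address with weight 5.
print(pick_record(RECORDS))
```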
The system architecture for network communication between the VPC and the DNS server is introduced as follows:
Resources can be isolated from each other by means of the VPC: security groups, subnets and other resources of different VPCs are isolated from one another, DNS is likewise isolated, and a DNS resolution domain configured in one VPC does not affect other VPCs. A plurality of resolution records can be configured in the DNS server. When the DNS server receives a DNS request message sent by a VM deployed in a VPC and resolves it, all hit resolution records are returned, and different priorities cannot be set for the resolution of different VMs; that is, this resolution method does not distinguish priority differences among the addresses corresponding to the VMs. Especially in an application scenario where the same VPC starts multiple VMs to deploy the same user service, such a resolution scheduling method is inefficient and inconvenient for the user service. Fig. 2 is a schematic diagram of a system framework to which the scheduling method of the embodiment of the present disclosure is applied. As shown in fig. 2, two subnets (subnet 1 and subnet 2) are deployed in the same VPC (e.g. VPC-1); multiple VMs (e.g. VM1 and VM2) may be deployed on one or more physical machines in subnet 1, and one VM (e.g. VM3) may be deployed on a physical machine in subnet 2. Since multiple VMs (e.g. VM1, VM2 and VM3) are configured in the same VPC (e.g. VPC-1) and the same user service can be deployed on them, the DNS server is bound to the VPC and is preconfigured, for the user service, with A records of the same name and the same type (such as mirrorxxx.com A) that resolve to the respective IP addresses of the multiple VMs. When resolving these same-name, same-type A records, the DNS server does not differentiate the priorities of the addresses corresponding to the VMs, so VM1, VM2 and VM3 all obtain the same three resolution records, as shown in fig. 2; that is, within the VPC, intelligent scheduling of domain name resolution cannot be implemented.
In one embodiment, the weight values include: weight values configured correspondingly for the address segment information or address information to which each of the multiple VMs belongs. With this embodiment, the importance of each of the VMs for resolution scheduling can be evaluated according to the weight values.
In an embodiment, scheduling message resolution for the VMs according to the proportion of the weight value in the total weight and selecting a target VM from the VMs includes: taking a random value against the proportion of each weight value in the total weight; and scheduling message resolution for the VMs according to the random value, and selecting a target VM from the VMs. With this embodiment, the scheduling decision is made with a fair random value, so that the VMs are treated more fairly and the load is better balanced.
In an embodiment, scheduling message resolution for the VMs according to the random value and selecting a target VM from the VMs includes: acquiring the target weight interval to which the random value belongs; and determining the target VM if the target weight interval corresponds to a preconfigured weight interval. With this embodiment, the random value is matched against the preconfigured weight intervals to decide how to schedule the VMs in a targeted manner, and the target VM can thus be selected from the VMs.
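A sketch of this interval matching, under the assumption that the weights are laid out as consecutive half-open intervals covering [0, total weight), might look as follows; the helper names are illustrative and not taken from the disclosure.

```python
import random

def build_intervals(weighted_vms):
    """Lay the per-VM weights out as consecutive half-open intervals
    [lo, hi) that together cover [0, total_weight)."""
    intervals, lo = [], 0
    for vm, weight in weighted_vms:
        intervals.append((vm, lo, lo + weight))
        lo += weight
    return intervals, lo  # lo now equals the total weight

def select_target_vm(weighted_vms):
    """Draw a random value below the total weight and return the VM whose
    preconfigured weight interval contains it."""
    intervals, total = build_intervals(weighted_vms)
    value = random.uniform(0, total)
    for vm, lo, hi in intervals:
        if lo <= value < hi:
            return vm
    return intervals[-1][0]  # guard against a floating-point edge case

# Weights 5, 3 and 2 produce the intervals [0, 5), [5, 8) and [8, 10).
print(select_target_vm([("VM1", 5), ("VM2", 3), ("VM3", 2)]))
```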
In one embodiment, resolving the request message sent by the target VM and feeding back the resolution record to the target VM in a targeted manner includes: selecting, from at least two preconfigured resolution records, the target resolution record corresponding to the target VM; and feeding back the target resolution record to the target VM. With this embodiment, after the target VM is selected from the multiple VMs, the target resolution record corresponding to the target VM can be selected from the at least two preconfigured resolution records and fed back to the target VM, so that different target VMs obtain different resolution records and targeted feedback of resolution records is achieved.
In an example, fig. 3 is a system framework diagram to which the scheduling method of the embodiment of the disclosure is applied. Compared with the system framework in fig. 2, the scheduling method running in the framework shown in fig. 3 can distinguish between VMs; especially in an application scenario where the same VPC starts multiple VMs to deploy the same user service, this resolution scheduling method greatly improves efficiency and brings more convenience to the user service. As shown in fig. 3, two subnets (subnet 1 and subnet 2) are deployed in the same VPC (e.g. VPC-1); multiple VMs (e.g. VM1 and VM2) can be deployed on one or more physical machines in subnet 1, and one VM (e.g. VM3) can be deployed on a physical machine in subnet 2. Since multiple VMs (e.g. VM1, VM2 and VM3) are configured in the same VPC (e.g. VPC-1) and the same user service can be deployed on them, the DNS server is bound to the VPC and is preconfigured, for the user service, with resolution records of the same name and the same type (such as mirrorxxx.com A) that resolve to different IP addresses corresponding to the respective addresses of the VMs. When the DNS server resolves these same-name, same-type A records for the VMs deployed in the VPC, the VMs can be distinguished from one another; with the embodiments and implementations described above, after a VM is selected from the multiple VMs according to the address priority represented by the weight value, intelligent scheduling of domain name resolution allows VM1, VM2 and VM3 to obtain different resolution records. In the respective resolution records "mirrorxxx.com A 192.168.1.12 weight(5)", "mirrorxxx.com A 192.168.1.13 weight(3)" and "mirrorxxx.com A 192.168.1.14 weight(2)", weight(5), weight(3) and weight(2) are the weight values configured for VM1, VM2 and VM3 respectively, so that address scheduling priority information is expressed by the weight values. The three records are combined into one record with a total weight of 10; a random value is taken according to the proportion of the different weight values in the total weight, and the weight interval in which the random value falls decides how to schedule the multiple VMs (VM1, VM2, VM3) in a targeted manner, so that the matched VM receives its resolution record in a targeted manner. An ACL attribute (that is, address-related information represented by the ACL) can also be set in a resolution record, for example configured with different partition granularities such as AZ granularity, subnet granularity or VM granularity. For example, AZ-A and AZ-B can distinguish the machine rooms to which different VMs belong, thereby solving the problem of cross-machine-room resolution. With this example, a random value can be taken according to the proportion of different weight values in the total weight so as to decide how to schedule the multiple VMs in a targeted manner, and after a VM is selected from the multiple VMs, the different VMs obtain different resolution records.
The address-related information represented by the ACL can further be set with different partition granularities according to the required matching precision, so that intelligent scheduling of resolution at AZ granularity, at subnet granularity, or at VM granularity (i.e. the IP corresponding to a VM) is realized respectively.
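One possible in-memory layout for a resolution record that carries both a weight and an ACL attribute at the granularities just mentioned is sketched below; the field names and the concrete AZ assignments are illustrative assumptions, not a schema defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolutionRecord:
    """One A record plus the scheduling metadata discussed above (assumed layout)."""
    name: str                          # domain name, e.g. "mirrorxxx.com"
    ip: str                            # resolved IP address
    weight: int                        # address scheduling priority
    acl_az: Optional[str] = None       # AZ-granularity ACL, e.g. "AZ-A"
    acl_subnet: Optional[str] = None   # subnet-granularity ACL, e.g. a /24 segment
    acl_vm_ip: Optional[str] = None    # VM (per-IP) granularity ACL

# The three records of the fig. 3 example, combined into one weighted set
# with a total weight of 10; the assignment to AZ-A/AZ-B is assumed.
RECORD_SET = [
    ResolutionRecord("mirrorxxx.com", "192.168.1.12", 5, acl_az="AZ-A"),
    ResolutionRecord("mirrorxxx.com", "192.168.1.13", 3, acl_az="AZ-A"),
    ResolutionRecord("mirrorxxx.com", "192.168.1.14", 2, acl_az="AZ-B"),
]
```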
In one embodiment, the method further comprises: preconfiguring the address scheduling priority information represented by the weight values according to at least one parameter, corresponding to each of the multiple VMs, among storage capacity, operating energy consumption, resource configuration and supported network bandwidth. Taking storage capacity as an example: when configuring the weight values on demand, if the user wants a task with large traffic to be taken on, a VM with large storage capacity can be assigned that task and given the first priority with a large weight, for example a weight value of 5; the next VM is configured with a weight value of 3 and the next with a weight value of 2. Combined with the operation on the random value, this takes into account not only the scheduling priority of each VM but also fairness and load balance in the resource proportion of each VM.
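As an illustration of pre-deriving the weight values from such parameters, the scoring formula below is purely an assumption; the disclosure only states that at least one of these parameters may be used.

```python
def derive_weight(storage_gb, bandwidth_mbps, cpu_cores, power_watts):
    """Assumed scoring: favour larger storage, bandwidth and CPU, penalise
    higher energy consumption, then clamp to a small integer weight."""
    score = (storage_gb / 100.0) + (bandwidth_mbps / 1000.0) \
            + cpu_cores - (power_watts / 500.0)
    return max(1, min(10, round(score)))

# e.g. a large VM intended to absorb heavy traffic ends up with the highest weight
vm_specs = {
    "VM1": dict(storage_gb=500, bandwidth_mbps=2000, cpu_cores=8, power_watts=300),
    "VM2": dict(storage_gb=200, bandwidth_mbps=1000, cpu_cores=4, power_watts=200),
    "VM3": dict(storage_gb=100, bandwidth_mbps=500, cpu_cores=2, power_watts=150),
}
weights = {vm: derive_weight(**spec) for vm, spec in vm_specs.items()}
print(weights)
```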
Application example:
the processing flow of the embodiment of the present disclosure includes the following contents:
fig. 4 is a flowchart of an application example to which the scheduling method of the embodiment of the present disclosure is applied, including: parsing the DNS request message and extracting the source IP address from it; searching the DNS records to obtain the resolution records corresponding to the DNS request message; taking the weight values corresponding to all the resolution records and obtaining the total weight from them; taking a random value smaller than the total weight; taking the weight interval in which the random value lies to make the resolution scheduling decision for the VMs; and returning the resolution record to the corresponding VM.
In the example shown in fig. 3, a subnet in the VPC is configured with an AZ attribute, and the same subnet does not span AZs; that is, an AZ can be mapped to the network segment of a subnet. When a DNS record is created, address scheduling priority information corresponding to the IP address or address segment that is allowed to resolve the record can be configured at the same time, and this information is represented by a weight value. Thus, after receiving request messages to be resolved from a plurality of VMs, the DNS server extracts the source IP address of each request message, searches for the records to be resolved, selects a VM from the plurality of VMs according to the address priority represented by the weight value, and the different VMs then obtain different resolution records. In the respective resolution records "mirrorxxx.com A 192.168.1.12 weight(5)", "mirrorxxx.com A 192.168.1.13 weight(3)" and "mirrorxxx.com A 192.168.1.14 weight(2)", weight(5), weight(3) and weight(2) are the weight values configured for VM1, VM2 and VM3 respectively, so that address scheduling priority information is expressed by the weight values. In the resolution scheduling process, as shown in fig. 3, the three records may be combined into one record whose total weight is 5 + 3 + 2 = 10; each time, a random value smaller than 10 is taken against this total weight to decide how to schedule the multiple VMs (VM1, VM2, VM3) in a targeted manner, and the record corresponding to the weight interval in which the random value falls is returned to the corresponding VM, so that the resolution record is fed back to the corresponding VM in a targeted manner.
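Putting the steps of fig. 4 together, a per-query handler might look like the sketch below. The request message is abstracted as a (source IP, queried name) pair and the record store is an in-memory dict, both of which are assumptions for illustration; the disclosure does not prescribe any particular parsing library or storage layout.

```python
import random

# Assumed in-memory DNS record store: per domain name, a list of
# (resolved IP, weight) pairs mirroring the example records above.
DNS_RECORDS = {
    "mirrorxxx.com": [
        ("192.168.1.12", 5),
        ("192.168.1.13", 3),
        ("192.168.1.14", 2),
    ],
}

def handle_query(source_ip, name):
    """Fig. 4 flow: look up the resolution records for the queried name,
    sum their weights, draw a random value smaller than the sum, and
    return the record whose weight interval contains that value."""
    records = DNS_RECORDS.get(name, [])
    if not records:
        return None
    total = sum(weight for _, weight in records)
    value = random.uniform(0, total)   # random value smaller than the total weight
    lo = 0
    for ip, weight in records:
        if lo <= value < lo + weight:
            return ip                  # targeted feedback of a single record
        lo += weight
    return records[-1][0]

# The source IP extracted from the request could additionally be matched
# against an ACL before the weighted selection (see the AZ sketch below).
print(handle_query("192.168.1.30", "mirrorxxx.com"))
```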
For the weight intervals: if the random value lies in the interval [0, 5], the resolution record corresponding to 192.168.1.12 is returned to the corresponding VM; if it lies in [5, 8], the record corresponding to 192.168.1.13 is returned to the corresponding VM; and if it lies in [8, 10], the record corresponding to 192.168.1.14 is returned to the corresponding VM. This differs from fig. 2, where all three records are returned and different VMs all receive the same resolution records.
Usually, a VPC (commonly called the big pool) is created in the network and is given an address segment, such as a /16; then subnets are created within that address segment, and after a smaller segment, such as a /24 (commonly called a small pool), is divided out for a subnet, VMs are created in it. The resolution records in the DNS are bound to the VPC, and the VPC has a plurality of subnet segments and attributes; for example, an AZ attribute can be bound to a subnet segment so as to determine which machine room a certain subnet belongs to. For example, subnet 1 in fig. 3 corresponds to AZ-A and subnet 2 corresponds to AZ-B. The same service can be deployed in different machine rooms, namely: subnet 1, corresponding to AZ-A, can restrict the DNS server to resolving only the service in AZ-A and not the one in AZ-B (resolving AZ-B at the same time would cross machine rooms); likewise, subnet 2, corresponding to AZ-B, can restrict the DNS server to resolving only the service in AZ-B. Cross-machine-room resolution is thus avoided after the restriction.
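A sketch of this AZ restriction, assuming concrete CIDR blocks for the two subnets (the description above only refers to a /16 VPC range and /24 subnets, so the addresses and AZ tags below are illustrative):

```python
import ipaddress

# Assumed bindings: each subnet (small pool) carved out of the VPC's /16
# address segment (big pool) is tagged with the AZ it belongs to.
SUBNET_AZ = {
    ipaddress.ip_network("192.168.1.0/24"): "AZ-A",   # subnet 1
    ipaddress.ip_network("192.168.2.0/24"): "AZ-B",   # subnet 2
}

# Each resolution record is tagged with the AZ whose requesting VMs may
# receive it (an assumed ACL attribute, independent of the record's own address).
RECORD_AZ = {
    "192.168.1.12": "AZ-A",
    "192.168.1.13": "AZ-A",
    "192.168.1.14": "AZ-B",
}

def az_of_source(source_ip):
    """Find the AZ of the requesting VM from the subnet its address falls in."""
    addr = ipaddress.ip_address(source_ip)
    for subnet, az in SUBNET_AZ.items():
        if addr in subnet:
            return az
    return None

def same_az_records(source_ip):
    """Keep only the records bound to the requester's AZ, so that resolution
    never crosses machine rooms."""
    az = az_of_source(source_ip)
    return [ip for ip, record_az in RECORD_AZ.items() if record_az == az]

# A VM in subnet 1 (AZ-A) only ever receives the AZ-A addresses.
print(same_az_records("192.168.1.55"))
```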
Through this application example, in addition to configuring the weight values, ACL configuration can further be carried out according to user requirements, and the ACL can be configured at different granularities. Even if different VMs deploy the same service, scheduling can be performed according to the address scheduling priority and different resolution records can be obtained in a targeted manner, which ultimately greatly improves the efficiency of the whole VPC.
With the present disclosure, in a resolution scenario with multiple records of the same name, a weight value can be set for each record; at each DNS resolution, a random value is taken against the sum of the weight values, and the record interval into which the random value falls decides which record is returned, thereby realizing intelligent scheduling.
According to an embodiment of the present disclosure, a scheduling apparatus is provided. Fig. 5 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present disclosure. As shown in fig. 5, the scheduling apparatus 500 includes: an obtaining module 501, configured to obtain a weight value, where the weight value is used to represent address scheduling priority information; a receiving module 502, configured to receive request messages sent by multiple VMs deployed in a VPC; a scheduling module 503, configured to schedule message resolution for the multiple VMs according to the proportion of the weight value in the total weight, and to select a target VM from the multiple VMs; and a feedback module 504, configured to resolve the request message sent by the target VM and feed back the resolution record to the target VM in a targeted manner.
In one embodiment, the weight values include: weight values configured correspondingly for the address segment information or address information to which each of the multiple VMs belongs.
In one embodiment, the scheduling module includes: a proportion operation submodule, configured to take a random value against the proportion of the weight value in the total weight; and a scheduling submodule, configured to schedule message resolution for the VMs according to the random value and select a target VM from the VMs.
In one embodiment, the scheduling sub-module is configured to: acquiring a target weight interval to which the random value corresponding to the target VM belongs; determining the target VM if the target weight interval corresponds to a preconfigured weight interval.
In an embodiment, the feedback module is configured to select a target parsing record corresponding to the target VM from at least two preconfigured parsing records; and feeding back the target analysis record to the target VM.
In an embodiment, the apparatus further includes a weight configuration module, configured to pre-configure the weight value according to at least one parameter of storage capacity, operation energy consumption, resource configuration, and supported network bandwidth respectively corresponding to the plurality of VMs.
The functions of each module in each apparatus in the embodiments of the present disclosure may refer to the corresponding description in the above method, and are not described herein again.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 6 is a block diagram of an electronic device for implementing a scheduling method of an embodiment of the present disclosure. The electronic device may be the aforementioned deployment device or proxy device. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 can also be stored. The computing unit 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as the scheduling method. For example, in some embodiments, the scheduling method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the scheduling method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method of scheduling, the method comprising:
acquiring a weight value, wherein the weight value is used for representing address scheduling priority information;
receiving request messages sent by a plurality of Virtual Machines (VM) deployed in a Virtual Private Cloud (VPC);
scheduling message analysis for the multiple VMs according to the proportion of the weight values in the total weight, and selecting a target VM from the multiple VMs;
and analyzing the request message sent by the target VM, and feeding back the analysis record to the target VM in a targeted manner.
2. The method of claim 1, wherein the weight values comprise: weight values configured correspondingly for the address segment information or address information to which each of the multiple VMs respectively belongs.
3. The method of claim 1, wherein scheduling the message analysis for the VMs according to the proportion of the weight value in the total weight, and selecting a target VM from the VMs comprises:
taking a random value against the proportion of the weight value in the total weight;
and scheduling the message analysis for the VMs according to the random value, and selecting the target VM from the VMs.
4. The method of claim 3, wherein scheduling the message analysis for the plurality of VMs according to the random value, and selecting the target VM from the plurality of VMs comprises:
acquiring a target weight interval to which the random value corresponding to the target VM belongs;
determining the target VM if the target weight interval corresponds to a preconfigured weight interval.
5. The method according to claim 3, wherein analyzing the request message sent by the target VM and feeding back the analysis record to the target VM in a targeted manner comprises:
selecting a target analysis record corresponding to the target VM from at least two preset analysis records;
and feeding back the target analysis record to the target VM.
6. The method of any of claims 1-5, further comprising:
and pre-configuring the weight values according to at least one parameter, corresponding to each of the plurality of VMs, among storage capacity, operating energy consumption, resource configuration and supported network bandwidth.
7. A scheduling apparatus, the apparatus comprising:
an acquisition module, configured to acquire a weight value, wherein the weight value is used for representing address scheduling priority information;
a receiving module, configured to receive request messages sent by a plurality of virtual machines (VMs) deployed in a virtual private cloud (VPC);
a scheduling module, configured to schedule message analysis for the VMs according to the proportion of the weight value in the total weight, and to select a target VM from the VMs;
and a feedback module, configured to analyze the request message sent by the target VM and feed back the analysis record to the target VM in a targeted manner.
8. The apparatus of claim 7, wherein the weight values comprise: weight values configured correspondingly for the address segment information or address information to which each of the multiple VMs respectively belongs.
9. The apparatus of claim 7, wherein the scheduling module comprises:
a proportion operation submodule, configured to take a random value against the proportion of the weight value in the total weight;
and a scheduling submodule, configured to schedule the message analysis for the VMs according to the random value and select the target VM from the VMs.
10. The apparatus of claim 9, wherein the scheduling submodule is configured to:
acquiring a target weight interval to which the random value corresponding to the target VM belongs;
determining the target VM if the target weight interval corresponds to a preconfigured weight interval.
11. The apparatus of claim 9, wherein the feedback module is configured to:
selecting a target analysis record corresponding to the target VM from at least two preset analysis records;
and feeding back the target analysis record to the target VM.
12. The apparatus of any of claims 7-11, further comprising a weight configuration module configured to:
pre-configure the weight values according to at least one parameter, corresponding to each of the plurality of VMs, among storage capacity, operating energy consumption, resource configuration and supported network bandwidth.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising computer instructions which, when executed by a processor, implement the method of any one of claims 1-6.
CN202110219367.2A 2021-02-26 2021-02-26 Scheduling method, scheduling device, electronic equipment and storage medium Active CN112968966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110219367.2A CN112968966B (en) 2021-02-26 2021-02-26 Scheduling method, scheduling device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110219367.2A CN112968966B (en) 2021-02-26 2021-02-26 Scheduling method, scheduling device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112968966A true CN112968966A (en) 2021-06-15
CN112968966B CN112968966B (en) 2023-05-02

Family

ID=76275892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110219367.2A Active CN112968966B (en) 2021-02-26 2021-02-26 Scheduling method, scheduling device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112968966B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634227A (en) * 2012-08-20 2014-03-12 百度在线网络技术(北京)有限公司 A service traffic precision scheduling method based on a user quantity and an apparatus thereof
CN103634314A (en) * 2013-11-28 2014-03-12 杭州华三通信技术有限公司 Service access control method and device based on VSR (virtual service router)
US9237087B1 (en) * 2011-03-16 2016-01-12 Google Inc. Virtual machine name resolution
CN108259642A (en) * 2018-01-02 2018-07-06 上海陆家嘴国际金融资产交易市场股份有限公司 Public service virtual machine access method and device based on private clound
US10033691B1 (en) * 2016-08-24 2018-07-24 Amazon Technologies, Inc. Adaptive resolution of domain name requests in virtual private cloud network environments


Also Published As

Publication number Publication date
CN112968966B (en) 2023-05-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant