CN116578420A - Water affair intelligent connection equipment and control method thereof - Google Patents


Info

Publication number
CN116578420A
Authority
CN
China
Prior art keywords
computing resource
feature
matrix
time sequence
vectors
Prior art date
Legal status
Pending
Application number
CN202310620565.9A
Other languages
Chinese (zh)
Inventor
王银春
李佳萌
王玺铭
辛晓岩
陈诺
Current Assignee
Hangzhou Water Data Intelligence Technology Co ltd
Original Assignee
Hangzhou Water Data Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Water Data Intelligence Technology Co ltd filed Critical Hangzhou Water Data Intelligence Technology Co ltd
Priority to CN202310620565.9A
Publication of CN116578420A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Biomedical Technology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A water affair intelligent connection device and a control method thereof acquire the remaining computing resource amount of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period. Then, using deep-learning-based artificial intelligence and taking the distributed server cluster as a whole as the point of entry, the computing resource bearing proportion is adaptively allocated according to the remaining real-time computing resources of each server in the cluster, so as to make more reasonable use of the cooperativity among the servers and the specificity of each individual server.

Description

Water affair intelligent connection equipment and control method thereof
Technical Field
The application relates to the technical field of intelligent control, in particular to water affair intelligent connection equipment and a control method thereof.
Background
Because the water service industry has extensive data acquisition and integration requirements, and the acquired data also resides in the industrial control service network, relatively high demands are placed on security and interconnection. The traditional data transmission structure is mainly realized through virtualization and front-end processors, which achieves security isolation to a certain extent.
In order to ensure service stability, a service interconnection port is reserved in the network to keep the remote operation and maintenance channel open. The security of terminal services is ensured by the application of a digital security intelligent terminal for water service management, but unified and secure management means are still lacking.
Thus, an intelligent water service solution is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a water affair intelligent connection device and a control method thereof. The device acquires the remaining computing resource amount of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period; then, using deep-learning-based artificial intelligence and taking the distributed server cluster as a whole as the point of entry, it adaptively allocates the computing resource bearing proportion according to the remaining real-time computing resources of each server in the cluster, so as to make more reasonable use of the cooperativity among the servers and the specificity of each individual server.
In a first aspect, there is provided a water affair intelligent connection device, comprising: a data acquisition module for acquiring the remaining computing resource amounts of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period; a vector arrangement module for arranging the remaining computing resource amounts of each server at the plurality of predetermined time points into an input vector along the time dimension, to obtain a plurality of computing resource time sequence input vectors; a time sequence feature extraction module for passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer, to obtain a plurality of computing resource time sequence feature vectors; a topology matrix construction module for constructing a communication distance topology matrix among the servers in the distributed server cluster; a feature extraction module for passing the communication distance topology matrix through a convolutional neural network model serving as a feature extractor, to obtain a communication topology feature matrix; a graph neural network module for passing the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through a graph neural network model, to obtain a topology global computing resource time sequence feature matrix; an optimization module for performing feature distribution integrity enhancement on the topology global computing resource time sequence feature matrix, to obtain an optimized topology global computing resource time sequence feature matrix; a matrix product calculation module for taking each of the computing resource time sequence feature vectors as a query feature vector and calculating its matrix product with the optimized topology global computing resource time sequence feature matrix, to obtain a plurality of classification feature vectors; a normalization processing module for passing the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and performing maximum-value-based normalization on the plurality of probability values to obtain a plurality of normalized probability values; and a task amount allocation module for allocating the computing task amount to each server in the distributed server cluster using the plurality of normalized probability values as allocation proportions.
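The matrix product step above can be sketched in a few lines; the function name and toy dimensions below are illustrative, not from the patent.

```python
# Illustrative sketch (hypothetical names, toy dimensions): each server's
# computing resource time sequence feature vector is used as a query feature
# vector and multiplied with the optimized topology global feature matrix to
# yield a classification feature vector.
def vec_mat_product(query, matrix):
    """1 x n query vector times n x m matrix -> 1 x m classification vector."""
    n, m = len(matrix), len(matrix[0])
    assert len(query) == n
    return [sum(query[i] * matrix[i][j] for i in range(n)) for j in range(m)]

query = [1.0, 0.5, -0.5]               # one timing feature vector (toy values)
topo = [[0.2, 0.1],
        [0.4, 0.3],
        [0.6, 0.5]]                    # optimized topology global matrix (toy)
classification_vector = vec_mat_product(query, topo)
```

In the device, one such product is computed per server, giving one classification feature vector per server.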
In the above water affair intelligent connection device, the first convolution layer and the second convolution layer use one-dimensional convolution kernels of different scales.
In the above water affair intelligent connection device, the time sequence feature extraction module comprises: a first-scale feature extraction unit, configured to input the plurality of computing resource time sequence input vectors into the first convolution layer of the time sequence feature extractor to obtain first-scale computing resource feature vectors, where the first convolution layer has a one-dimensional convolution kernel of a first scale; a second-scale feature extraction unit, configured to input the plurality of computing resource time sequence input vectors into the second convolution layer of the time sequence feature extractor to obtain second-scale computing resource feature vectors, where the second convolution layer has a one-dimensional convolution kernel of a second scale, the first scale being different from the second scale; and a multi-scale cascading unit, configured to cascade the first-scale computing resource feature vectors and the second-scale computing resource feature vectors to obtain the plurality of computing resource time sequence feature vectors.
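As a rough illustration of the two-branch extractor described above, the following sketch (with made-up kernel values and helper names) applies two one-dimensional convolutions of different scales to one resource time series and cascades the results:

```python
def conv1d(seq, kernel):
    """Valid one-dimensional convolution (cross-correlation) of seq with kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def multi_scale_features(seq, kernel_a, kernel_b):
    """Two convolution branches with kernels of different scales, cascaded
    (concatenated) into one feature vector."""
    return conv1d(seq, kernel_a) + conv1d(seq, kernel_b)

# Remaining computing resources of one server at five predetermined time points.
series = [4.0, 3.5, 3.0, 2.5, 3.0]
features = multi_scale_features(series, [0.5, 0.5], [1 / 3, 1 / 3, 1 / 3])
```

A trained extractor would learn the kernel weights; the fixed averaging kernels here only demonstrate the data flow.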
In the above water affair intelligent connection device, the feature extraction module is configured so that each layer of the convolutional neural network model serving as the feature extractor performs, in its forward pass, convolution processing, pooling along the channel dimension, and nonlinear activation on the input data, where the input of the first layer of the model is the communication distance topology matrix and the output of the last layer is the communication topology feature matrix.
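A minimal sketch of one such layer, assuming plain Python lists for the matrices; all function names and the tiny kernel are illustrative, not from the patent:

```python
def conv2d(m, k):
    """Valid 2-D convolution of matrix m with kernel k."""
    kh, kw = len(k), len(k[0])
    return [[sum(m[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
             for j in range(len(m[0]) - kw + 1)]
            for i in range(len(m) - kh + 1)]

def relu(x):
    """Nonlinear activation."""
    return x if x > 0.0 else 0.0

def extractor_layer(channels, kernel):
    """One layer of the feature extractor: per-channel convolution,
    max-pooling along the channel dimension, then nonlinear activation."""
    convs = [conv2d(c, kernel) for c in channels]
    h, w = len(convs[0]), len(convs[0][0])
    pooled = [[max(c[i][j] for c in convs) for j in range(w)] for i in range(h)]
    return [[relu(v) for v in row] for row in pooled]
```

Stacking several such layers, with the communication distance topology matrix as the first input, would yield the communication topology feature matrix.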
In the above water affair intelligent connection device, the optimization module is configured to enhance the integrity of the feature distribution of the topology global computing resource time sequence feature matrix through the following optimization formula to obtain the optimized topology global computing resource time sequence feature matrix:

$M' = \exp\left(M \odot M^{\top}\right) \oplus \exp\left(D \odot D^{\top}\right)$

where $M$ is the topology global computing resource time sequence feature matrix, $M'$ is the optimized topology global computing resource time sequence feature matrix, $M^{\top}$ is the transpose of $M$, $D = [d_{i,j}]$ is the distance matrix composed of the distances $d_{i,j} = \left\| m_i - m_j \right\|$ between every two row feature vectors $m_i$ and $m_j$ of $M$, $D^{\top}$ is the transpose of $D$, $\exp(\cdot)$ denotes the exponential operation of a matrix, i.e., raising the natural exponential function to the power of the feature value at each position of the matrix, and $\odot$ and $\oplus$ denote position-wise multiplication and matrix addition, respectively.
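The distance matrix D in the symbol definitions above can be computed directly from the row feature vectors; a small sketch follows, assuming the Euclidean metric, since the patent does not specify which distance is used:

```python
import math

def row_distance_matrix(M):
    """Distance matrix D whose entry d_ij is the distance between the i-th and
    j-th row feature vectors of M (Euclidean distance assumed here)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [[dist(r, s) for s in M] for r in M]
```

D is symmetric with a zero diagonal, which is why its transpose carries the same pairwise information.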
In the above water affair intelligent connection device, the normalization processing module is configured to process the plurality of classification feature vectors using the classifier according to the following classification formula to obtain the plurality of probability values:

$O = \mathrm{softmax}(W \cdot V + B)$

where $V$ represents each of the plurality of classification feature vectors, $W$ is the weight matrix of the fully connected layer, and $B$ represents the bias vector of the fully connected layer.
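A hedged sketch of the classifier step and the maximum-value-based normalization that follows it; reading "maximum value-based normalization" as division by the largest probability value is an assumption, as the patent does not spell the operation out:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def classify(v, W, b):
    """Fully connected layer followed by softmax: softmax(W * v + b)."""
    logits = [sum(wi * vi for wi, vi in zip(row, v)) + bi
              for row, bi in zip(W, b)]
    return softmax(logits)

def max_normalize(probs):
    """Assumed reading of 'maximum value-based normalization':
    divide every probability value by the largest one."""
    m = max(probs)
    return [p / m for p in probs]
```

With this reading, the server with the highest probability receives the reference value 1.0 and the others are scaled relative to it.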
In a second aspect, there is provided a water affair intelligent connection control method, comprising: acquiring the remaining computing resource amounts of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period; arranging the remaining computing resource amounts of each server at the plurality of predetermined time points into an input vector along the time dimension, to obtain a plurality of computing resource time sequence input vectors; passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer, to obtain a plurality of computing resource time sequence feature vectors; constructing a communication distance topology matrix among the servers in the distributed server cluster; passing the communication distance topology matrix through a convolutional neural network model serving as a feature extractor, to obtain a communication topology feature matrix; passing the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through a graph neural network model, to obtain a topology global computing resource time sequence feature matrix; performing feature distribution integrity enhancement on the topology global computing resource time sequence feature matrix, to obtain an optimized topology global computing resource time sequence feature matrix; taking each of the computing resource time sequence feature vectors as a query feature vector and calculating its matrix product with the optimized topology global computing resource time sequence feature matrix, to obtain a plurality of classification feature vectors; passing the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and performing maximum-value-based normalization on the plurality of probability values to obtain a plurality of normalized probability values; and allocating the computing task amount to each server in the distributed server cluster using the plurality of normalized probability values as allocation proportions.
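The final allocation step, i.e. splitting the computing task amount using the normalized probability values as proportions, can be sketched as follows (the function name and the re-normalization by the sum are illustrative choices):

```python
def allocate_tasks(total_amount, proportions):
    """Split a total computing task amount across the servers in proportion to
    the given normalized probability values."""
    s = sum(proportions)
    return [total_amount * p / s for p in proportions]
```

For example, proportions of 1 : 1 : 2 would send half of the task amount to the third server and a quarter to each of the other two.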
In the above water affair intelligent connection control method, the first convolution layer and the second convolution layer use one-dimensional convolution kernels of different scales.
In the above water affair intelligent connection control method, passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors comprises: inputting the plurality of computing resource time sequence input vectors into the first convolution layer of the time sequence feature extractor to obtain first-scale computing resource feature vectors, where the first convolution layer has a one-dimensional convolution kernel of a first scale; inputting the plurality of computing resource time sequence input vectors into the second convolution layer of the time sequence feature extractor to obtain second-scale computing resource feature vectors, where the second convolution layer has a one-dimensional convolution kernel of a second scale, the first scale being different from the second scale; and cascading the first-scale computing resource feature vectors and the second-scale computing resource feature vectors to obtain the plurality of computing resource time sequence feature vectors.
In the above water affair intelligent connection control method, passing the communication distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix comprises: performing, with each layer of the convolutional neural network model in its forward pass, convolution processing, pooling along the channel dimension, and nonlinear activation on the input data, where the input of the first layer of the model is the communication distance topology matrix and the output of the last layer is the communication topology feature matrix.
Compared with the prior art, the water affair intelligent connection device and the control method thereof provided by the present application acquire the remaining computing resource amount of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period; then, using deep-learning-based artificial intelligence and taking the distributed server cluster as a whole as the point of entry, they adaptively allocate the computing resource bearing proportion according to the remaining real-time computing resources of each server in the cluster, so as to make more reasonable use of the cooperativity among the servers and the specificity of each individual server.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic architecture diagram of a cloud resource management design according to an embodiment of the present application.
FIG. 2 is a schematic diagram of a security management design according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a disaster recovery design according to an embodiment of the present application.
Fig. 4 is an application scenario diagram of the water affair intelligent connection device according to an embodiment of the present application.
Fig. 5 is a block diagram of the water affair intelligent connection device according to an embodiment of the present application.
Fig. 6 is a block diagram of the time sequence feature extraction module in the water affair intelligent connection device according to an embodiment of the present application.
Fig. 7 is a flowchart of the water affair intelligent connection control method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of the system architecture of the water affair intelligent connection control method according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions according to the embodiments of the present application will be given with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited, the term "connected" should be construed broadly: it may be an electrical connection, communication between two elements, a direct connection, or an indirect connection via an intermediate medium; those skilled in the art will understand the specific meaning of the term according to the circumstances.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order among them; where permitted, "first", "second" and "third" may be interchanged, so that the embodiments described herein can be practiced in sequences other than those illustrated or described.
Accordingly, in the technical scheme of the present application, the water affair intelligent connection device is standardized on the basis of the application system. Following a distributed service model application architecture, and taking the access security, control security and data security of the industrial control network as the point of entry, a network and data security system is deployed from the terminal to the access side on the industrial control network and managed uniformly as one very large cluster, which directly strengthens dynamic allocation, adjustment and recovery of resources, unified management and control, deployment flexibility, convenience of operation and maintenance, redundancy of the overall architecture, disaster recovery capability, and other aspects. The main architecture of the water affair intelligent connection device is shown in Fig. 1. Based on a self-operation-and-maintenance design, a self-healing architecture is built to achieve overall high availability of the system; physical resources are converted into logical resources by constructing a software-defined data center, and the supply speed of IT resources is accelerated and the service level improved by means of APIs, template construction and the like; the configurable shared pool of computing resources of the data center is accessed via a secure network; and reliable resource usage is guaranteed through the disaster recovery design.
The cloud resource management design mainly comprises a headquarters private cloud and site edge clouds. A single private cloud platform is physically shared between the headquarters and the branch companies, while each branch company logically retains the independence to manage its own business; the private cloud platform mainly provides the branch companies with the comprehensive office, tenant, security video monitoring, industrial control and other services required by each tenant. First, in terms of resource storage, management is mainly performed through the standard SMI-S protocol, supplemented by command lines and scripting. In cloud resource call management, since the company's virtualization is currently based mainly on VMware, modules are independently developed through a private protocol; meanwhile, for the small portion of virtual machines based on KVM, application adaptation is performed in combination with the core module nova in OpenStack, and the result is finally packaged as a service released to front-end users. The edge cloud is mainly deployed on lower-configuration computing nodes or industrial personal computers with 4 cores and more than 8 GB of memory, and runs a lightweight management and analysis system, an industrial-PC front-end processor, and a network-isolation NGFW.
The security management design covers system and network security, application security, data security, content security, identity authentication, security management, integration with third-party security resource pool products, and other layers, meeting diversified security requirements. By deploying a dedicated virtual network, a virtual out-of-band management mode is provided for the software protection system: management is not exposed on the service ports, a closed loop within the private network is realized for operation and maintenance management, and remote monitoring, online capacity expansion, master-slave synchronization, high availability, automatic backup and other cross-network security functions are realized. This is mainly implemented through SDN: support for the VxLAN layer-2 tunnel encapsulation protocol, combined with automatic configuration and deployment of the SDN network by the controller, mainly completes the isolation and access route management of the service network; clusters are then managed and connected to applications through interfaces. The network is deployed in VLAN mode, with different services under each user divided into different VLANs; head-office users or dedicated network security users can see and associate the network cards of all VLAN segments, while other branch company users can only see the VLANs belonging to themselves. The security architecture is shown in Fig. 2.
In the disaster recovery design, disaster recovery is realized through system redundancy, disaster detection, system migration and other technologies, and is designed in three layers. The first layer is active-active: a high-availability architecture achieves high availability of global services, and database synchronization keeps the metadata information consistent, so that the configuration information of the business system maps stably between the primary and standby sites. The second layer is storage disaster backup: the I/O of the front-end application is written into a storage volume of the primary cluster through a storage policy, the primary cluster simultaneously writes the I/O to the backup cluster, and the primary cluster returns "write complete" to the front end only after the backup cluster confirms. The third layer is resource orchestration: through resource orchestration, the business system automatically builds a disaster backup business system in the Shahe disaster recovery zone, the underlying data relies on the remote data synchronization of NeonSAN, and asynchronous disaster backup is realized between the primary site and the backup site. The overall implementation architecture is shown in Fig. 3.
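The second-layer write path described above (the primary cluster writes locally, forwards the I/O to the backup cluster, and acknowledges only after the backup confirms) can be sketched as follows; the class and method names are invented for illustration:

```python
class Cluster:
    """Stand-in for a storage cluster; only records written I/O blocks."""
    def __init__(self):
        self.volume = []

    def write(self, block):
        self.volume.append(block)
        return "write complete"   # confirmation back to the caller

def replicated_write(primary, backup, block):
    """Primary writes the I/O to its own storage volume, forwards it to the
    backup cluster, and only returns 'write complete' to the front end after
    the backup cluster has confirmed."""
    primary.write(block)
    ack = backup.write(block)
    if ack != "write complete":
        raise IOError("backup cluster did not confirm the write")
    return "write complete"
```

This synchronous acknowledgement is what distinguishes the second layer from the asynchronous third-layer replication between sites.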
The self-operation-and-maintenance design adopts a global P2P architecture to provide system-wide high availability, and forms system-wide self-healing capability through self operation and maintenance. First, the platform automatically discovers and removes servers in a failed state, ensuring that the computing resources, load resources and the like requested by users at any time are established on available servers. Meanwhile, a distributed file system is used to build a unified storage pool, providing real-time remote multi-copy, hard disk snapshot and other functions; a virtual load balancer isolates failed application servers, achieving a highly available, load-balanced running environment in which hardware faults are harmless and service continuity is guaranteed. On the other hand, the platform also has a built-in scripting tool with which operation and maintenance personnel can automatically repair simple faults and raise alerts for hidden dangers; at the same time, by integrating event information, the platform automatically compares resource usage history, alarm distribution and other information, comprehensively analyzes the dependencies between applications and components, applications and virtual networks, applications and physical resources, and so on, and locates the platform's fault points, further improving its self-operation-and-maintenance capability.
In particular, in the technical scheme of the present application, the platform can automatically discover servers in a failed state and remove them, ensuring that the computing resources, load resources and the like requested by users at any time are established on available servers. An important technical problem, however, is how to allocate the amount of computing resources requested by the user, that is, how to determine the computing resource bearing proportion assumed by each available server. It should be understood that the distributed server cluster is an organic whole and its servers are not completely independent of one another; therefore, in determining how to allocate the amount of computing resources requested by the user, the distributed server cluster as a whole should be taken as the point of entry, and the computing resource bearing proportion should be adaptively allocated according to the remaining real-time computing resources of each server, so as to make more reasonable use of the cooperativity among the servers and the specificity of each individual server.
Specifically, in the technical scheme of the application, the residual computing resource amounts of a plurality of preset time points of each server in the distributed server cluster in a preset time period are firstly obtained. Here, the obtaining of the remaining computing resource amounts of each server in the distributed server cluster at a plurality of predetermined time points in a predetermined time period is to know the current load condition of each server, so that the computing resource bearing proportion can be more reasonably distributed, and the computing resource utilization efficiency of the distributed server cluster is improved.
And then, the remaining computing resource amounts of each server at the plurality of predetermined time points in the predetermined time period are arranged into input vectors according to the time dimension to obtain a plurality of computing resource time sequence input vectors, and the plurality of computing resource time sequence input vectors are respectively passed through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors. In this way, the time sequence features of how each server's computing resource utilization changes over time are obtained, so that the real-time state and historical state of each server in the distributed server cluster can be effectively abstracted and expressed, and richer and more accurate feature information extracted, thereby better describing the cooperativity among the servers in the distributed server cluster and the specificity of each server.
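As a concrete illustration, the two-branch extractor can be sketched in numpy. The kernel weights below are hypothetical stand-ins for weights that would be learned during training, and the kernel scales (2 and 3) are chosen only for the example:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1-D valid convolution of a time series with a kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def timing_feature_vector(resource_series, k1, k2):
    """Multi-scale timing feature vector for one server's remaining-resource
    series: two parallel 1-D convolutions at different scales, concatenated
    (cascaded) into a single computing resource timing feature vector."""
    f1 = conv1d_valid(resource_series, k1)   # first-scale branch
    f2 = conv1d_valid(resource_series, k2)   # second-scale branch
    return np.concatenate([f1, f2])

# toy remaining-resource samples for one server at 6 predetermined time points
series = np.array([0.9, 0.8, 0.7, 0.75, 0.6, 0.5])
k1 = np.array([0.5, 0.5])          # scale-2 kernel (hypothetical weights)
k2 = np.array([1/3, 1/3, 1/3])     # scale-3 kernel (hypothetical weights)
vec = timing_feature_vector(series, k1, k2)
```

Running this per server yields the plurality of computing resource timing feature vectors that feed the later fusion steps.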
Meanwhile, a communication distance topology matrix among the servers in the distributed server cluster is constructed. As noted above, the servers in a distributed server cluster are not completely independent of each other; they need to cooperate through communication to accomplish tasks. Therefore, by considering both the remaining real-time computing resources of each server and the communication distances between servers, the amount of computing tasks each server should bear can be determined more accurately, thereby optimizing the utilization of computing resources.
Specifically, in this step, a topology matrix of communication distances between the servers in the distributed server cluster is constructed in order to describe the communication relationships and distances between the servers in the distributed server cluster. In the process of computing resource allocation, not only real-time states and change trends of all servers but also communication relations among all servers need to be considered, so that the collaboration among all servers in the distributed server cluster is better utilized. Therefore, by constructing the communication distance topology matrix, the communication relation among different servers can be reflected, and the overall structure and characteristics of the distributed server cluster are further reflected.
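A minimal numpy sketch of building such a matrix from measured pairwise communication distances; the latency values and the three-server cluster are invented purely for illustration:

```python
import numpy as np

def build_topology_matrix(distances, n):
    """Build an n x n communication-distance topology matrix.
    The off-diagonal entry (i, j) holds the measured communication
    distance (e.g. latency) between servers i and j; the diagonal is
    zero, since a server has no distance to itself."""
    M = np.zeros((n, n))
    for (i, j), d in distances.items():
        M[i, j] = d
        M[j, i] = d  # communication distances are symmetric
    return M

# hypothetical pairwise latencies (ms) among three servers
distances = {(0, 1): 2.0, (0, 2): 5.0, (1, 2): 3.0}
T = build_topology_matrix(distances, 3)
```

The resulting matrix is the input to the convolutional feature extractor described next in the text.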
And then, the communication distance topology matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix. Here, in the technical solution of the present application, the value at each off-diagonal position in the communication distance topology matrix represents the communication distance between the corresponding two servers. Convolution-kernel-based feature extraction is then performed on the communication distance topology matrix using a convolutional neural network model, which has excellent performance in the field of local correlation feature extraction, to capture the correlation pattern features between the communication distances of the respective servers. Those skilled in the art will appreciate that convolutional neural networks have the advantages of multiple levels of abstraction, shared parameters, translational invariance, and the like, and are widely used in the field of image processing. Applying the convolutional neural network model to the processing of the communication distance topology matrix can effectively extract the network topology relations among different servers, reflecting the communication intensity and frequency between them.
And then, the plurality of computing resource time sequence feature vectors and the communication topology feature matrix are processed through a graph neural network model to obtain a topology global computing resource time sequence feature matrix. The time sequence feature vectors of the plurality of computing resources and the communication topological feature matrix are processed through a graph neural network model to obtain the time sequence feature matrix of the topological global computing resources, so that the computing resource features and the communication distance topological features of all servers in the distributed server cluster are comprehensively considered, and the integrity and the synergy of the distributed server cluster are reflected.
In the process of computing resource allocation, the real-time state and the change trend of the computing resources of each server and the communication relationship among the servers need to be considered simultaneously. Therefore, the graph neural network model is used for processing the time sequence feature vectors of the plurality of computing resources and the communication topology feature matrix, so that the cooperative characteristic among the servers in the distributed server cluster can be better described, and the accuracy of computing resource allocation is improved.
Those of ordinary skill in the art will appreciate that the graph neural network model is capable of efficiently learning the feature information of nodes and edges in the graph structure and extracting useful features therefrom to provide support for subsequent computing task assignments. The multiple computing resource time sequence feature vectors and the communication topology feature matrix are combined, a graph reflecting the overall state of the distributed server cluster can be established, and then the graph neural network model is utilized to process and analyze the graph, so that a more comprehensive and accurate topology global computing resource time sequence feature matrix is obtained.
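The text does not fix a particular graph neural network. As one plausible sketch, a single graph-convolution step in the widely used GCN form H = ReLU(Â X W) can fuse the per-server timing feature vectors (rows of X) with the communication topology (A); the toy matrices and the identity weight W are assumptions for illustration only:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetrically normalize the
    adjacency-like communication matrix, propagate node features over
    the graph, then apply ReLU. Rows of X are per-server computing
    resource timing feature vectors."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # normalized propagation matrix
    return np.maximum(A_norm @ X @ W, 0.0)     # ReLU activation

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])    # toy communication graph for three servers
X = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.2, 0.8]])      # toy timing feature vectors
W = np.eye(2)                   # hypothetical learned weight matrix
H = gcn_layer(A, X, W)          # topology-global feature matrix (one row per server)
```

Each row of H mixes a server's own resource features with those of its communication neighbors, which is exactly the "topology global" fusion the passage describes.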
And then, taking each computing resource time sequence feature vector in the computing resource time sequence feature vectors as a query feature vector, and calculating a matrix product between the query feature vector and the topological global computing resource time sequence feature matrix to obtain a plurality of classification feature vectors. That is, the computing resource features of the servers are used as query feature vectors, and the matrix product between the query feature vectors and the topological global computing resource time sequence feature matrix is calculated, so that the computing resource features of the servers are mapped into the high-dimensional feature space of the topological global computing resource time sequence feature matrix to obtain the plurality of classification feature vectors.
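The query-and-project step above is a plain matrix product; a minimal numpy sketch with random stand-in features (the dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_servers, dim = 3, 4
timing_vectors = rng.random((n_servers, dim))   # per-server query feature vectors
global_matrix = rng.random((dim, dim))          # topology-global timing feature matrix

# map each server's features into the global feature space:
# one classification feature vector per server
classification_vectors = [v @ global_matrix for v in timing_vectors]
```

Stacking the per-server products is equivalent to a single batched matrix multiplication, which is how this step would normally be implemented.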
And then, the plurality of classification feature vectors are passed through a classifier to obtain a plurality of probability values, and the plurality of probability values are subjected to maximum-value-based normalization processing to obtain a plurality of normalized probability values. In particular, in the technical scheme of the present application, the plurality of normalized probability values are used as allocation proportions to allocate the amount of computing tasks to each server in the distributed server cluster. In this way, when determining how to allocate the amount of computing resources requested by the user, the computing resource bearing proportion is adaptively allocated by taking the distributed server cluster as the starting point and using the remaining real-time computing resources of each server in the cluster, so as to make more reasonable use of the cooperativity among the servers and the specificity of each server.
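A sketch of these two steps, assuming the classifier ends in a softmax over per-server scores (the scores themselves are invented for the example):

```python
import numpy as np

def softmax(z):
    """Stable softmax: turns classifier scores into probability values."""
    e = np.exp(z - z.max())
    return e / e.sum()

def max_normalize(p):
    """Maximum-value-based normalization: divide every probability value
    by the largest one, so the best-placed server gets proportion 1.0."""
    p = np.asarray(p, dtype=float)
    return p / p.max()

# hypothetical per-server classifier scores
scores = np.array([2.0, 1.0, 0.5])
probs = softmax(scores)        # plurality of probability values
norm = max_normalize(probs)    # plurality of normalized probability values
```

The normalized values then serve directly as the allocation proportions used in the final task-assignment step.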
In particular, in the technical scheme of the present application, when the plurality of computing resource time sequence feature vectors and the communication topology feature matrix are processed through a graph neural network model to obtain the topological global computing resource time sequence feature matrix, the graph neural network model performs feature fusion on the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through learnable neural network parameters to obtain the topological global computing resource time sequence feature matrix containing the irregular communication distance topological features and the remaining computing resource amount time sequence features, wherein each row vector in the topological global computing resource time sequence feature matrix represents the remaining computing resource time sequence features of the corresponding server, fused with the communication topological features with respect to the other servers.
However, in essence, the row vectors of the topological global computing resource time sequence feature matrix are merely arranged two-dimensionally for data aggregation. If the integrity of the feature distribution of the topological global computing resource time sequence feature matrix can be further improved, the certainty of its feature expression can be improved accordingly. Then, when each of the plurality of computing resource time sequence feature vectors is taken as a query feature vector and its matrix product with the topological global computing resource time sequence feature matrix is calculated to obtain the plurality of classification feature vectors, the certainty and structure of the feature distribution of the classification feature vectors are enhanced, improving their classification accuracy.
Based on this, the applicant of the present application performs vector spectral clustering agent learning fusion optimization on the topological global computing resource time sequence feature matrix, denoted M, to obtain an optimized topological global computing resource time sequence feature matrix, denoted M′, as follows: M′ = exp(D) ⊙ M ⊕ exp(Dᵀ) ⊙ Mᵀ, wherein D is a distance matrix composed of the distances between the corresponding pairs of row feature vectors of the topological global computing resource time sequence feature matrix M.
Here, the internal quasi-regression semantic features of each row vector of the topological global computing resource time sequence feature matrix are mixed with synthesized noise features, which blurs the demarcation between meaningful quasi-regression semantic features and noise features. The vector spectral clustering agent learning fusion optimization introduces spectral clustering agent learning, which represents the spatial layout and semantic similarity between vectors, to exploit the conceptual information of the association between the quasi-regression semantic features and the quasi-regression scene, and performs hidden supervision propagation over the potential association attributes between the row vectors of the topological global computing resource time sequence feature matrix. This improves the overall distribution dependency of the topological global computing resource time sequence feature matrix as a synthesized feature, and thereby improves the classification effect of the classification feature vectors in classification regression through the classifier.
Fig. 4 is an application scenario diagram of a water affair intelligent joint device according to an embodiment of the present application. As shown in fig. 4, in this application scenario, first, the remaining amounts of computing resources (e.g., C as illustrated in fig. 4) at a plurality of predetermined time points for each server in the distributed server cluster are acquired; then, the obtained remaining amounts of computing resources are input into a server (e.g., S as illustrated in fig. 4) on which a water affair intelligence algorithm is deployed, wherein the server is capable of processing the remaining amounts of computing resources based on that algorithm, passing the plurality of classification feature vectors through a classifier to obtain a plurality of probability values and performing maximum-value-based normalization on these probability values to obtain a plurality of normalized probability values; and finally, the amount of computing tasks is allocated to each server in the distributed server cluster with the plurality of normalized probability values as allocation proportions.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, fig. 5 is a block diagram of a water service intelligent joint device according to an embodiment of the present application. As shown in fig. 5, the water affair intelligent joint device 100 according to the embodiment of the present application includes: a data acquisition module 101, configured to acquire remaining amounts of computing resources at a plurality of predetermined time points in a predetermined time period for each server in the distributed server cluster; a vector arrangement module 102, configured to arrange the remaining computing resource amounts of the respective servers at a plurality of predetermined time points in a predetermined time period as input vectors according to a time dimension, so as to obtain a plurality of computing resource time sequence input vectors; a timing feature extraction module 103, configured to pass the plurality of computing resource timing input vectors through a timing feature extractor including a first convolution layer and a second convolution layer, respectively, to obtain a plurality of computing resource timing feature vectors; a topology matrix construction module 104, configured to construct a topology matrix of communication distances between servers in the distributed server cluster; a feature extraction module 105, configured to pass the communication distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix; the graph neural network module 106 is configured to pass the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through a graph neural network model to obtain a topology global computing resource time sequence feature matrix; an optimization module 107, configured to perform feature distribution integrity enhancement on the topology global computing resource time sequence feature matrix to obtain an 
optimized topology global computing resource time sequence feature matrix; a matrix product calculation module 108, configured to calculate a matrix product between each computing resource timing feature vector of the plurality of computing resource timing feature vectors and the optimized topology global computing resource timing feature matrix to obtain a plurality of classification feature vectors; a normalization processing module 109, configured to pass the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and perform maximum value-based normalization processing on the plurality of probability values to obtain a plurality of normalized probability values; and a task amount allocation calculation module 110, configured to allocate a calculated task amount to each server in the distributed server cluster with the plurality of normalized probability values as allocation proportions.
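The allocation step of module 110 reduces to proportional splitting of the total task amount; a minimal sketch (the total task amount and the proportions are invented for illustration):

```python
import numpy as np

def allocate_tasks(total_tasks, normalized_probs):
    """Split a total computing-task amount across the servers of the
    cluster in proportion to their normalized probability values."""
    w = np.asarray(normalized_probs, dtype=float)
    shares = w / w.sum()        # proportions that sum to 1
    return shares * total_tasks

# e.g. 100 task units split over three servers with proportions 1.0 : 0.5 : 0.5
alloc = allocate_tasks(100.0, [1.0, 0.5, 0.5])
```

Because the proportions are renormalized inside the function, the allocated amounts always sum to the requested total regardless of the scale of the input values.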
Specifically, in the embodiment of the present application, the data acquisition module 101 is configured to acquire the remaining amounts of computing resources of each server in the distributed server cluster at a plurality of predetermined time points within a predetermined period of time. Particularly, in the technical scheme of the present application, the platform can automatically find a server in a failure state and remove it, so that the computing resources, load resources and the like requested by the user at any time are guaranteed to be established on available servers. However, an important technical problem is how to allocate the amount of computing resources requested by the user, i.e. how to determine the proportion of the computing resource allocation borne by each available server.
It should be understood that a distributed server cluster is an organic whole whose servers are not completely independent of each other. Therefore, in determining how to allocate the amount of computing resources requested by the user, it is necessary to take the distributed server cluster as the starting point and adaptively allocate the computing resource bearing proportion according to the remaining real-time computing resources of each server in the cluster, so as to make more reasonable use of the cooperativity among the servers and the specificity of each server itself.
Specifically, in the technical scheme of the application, the residual computing resource amounts of a plurality of preset time points of each server in the distributed server cluster in a preset time period are firstly obtained. Here, the obtaining of the remaining computing resource amounts of each server in the distributed server cluster at a plurality of predetermined time points in a predetermined time period is to know the current load condition of each server, so that the computing resource bearing proportion can be more reasonably distributed, and the computing resource utilization efficiency of the distributed server cluster is improved.
Specifically, in the embodiment of the present application, the vector arrangement module 102 is configured to arrange the remaining computing resource amounts of the respective servers at a plurality of predetermined time points in a predetermined time period as input vectors according to a time dimension, so as to obtain a plurality of computing resource time sequence input vectors. And then, arranging the residual computing resource amounts of a plurality of preset time points of each server in a preset time period into input vectors according to a time dimension respectively to obtain a plurality of computing resource time sequence input vectors.
Specifically, in the embodiment of the present application, the timing feature extraction module 103 is configured to pass the plurality of computing resource timing input vectors through a timing feature extractor including a first convolution layer and a second convolution layer, so as to obtain a plurality of computing resource timing feature vectors. And respectively passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors.
The method comprises the steps of arranging the residual computing resource quantities of a plurality of preset time points of each server in a preset time period into a plurality of computing resource time sequence input vectors according to a time dimension, and extracting the plurality of computing resource time sequence input vectors through a time sequence feature extractor to obtain time sequence features of time variation of computing resource utilization conditions of each server, so that real-time states and historical states of each server in a distributed server cluster can be effectively abstracted and expressed, and further, richer and more accurate feature information is extracted, and therefore the synergy and specificity among each server in the distributed server cluster are better described.
Wherein the first convolution layer and the second convolution layer each use one-dimensional convolution kernels of different scales.
Fig. 6 is a block diagram of the timing feature extraction module in the water service intelligent joint device according to the embodiment of the present application, as shown in fig. 6, the timing feature extraction module 103 includes: a first scale feature extraction unit 1031, configured to input the plurality of computing resource timing input vectors into a first convolution layer of the timing feature extractor to obtain a first scale computing resource feature vector, where the first convolution layer has a one-dimensional convolution kernel of a first scale; a second scale feature extraction unit 1032 for inputting the plurality of computing resource timing input vectors into a second convolution layer of the timing feature extractor to obtain a second scale computing resource feature vector, wherein the second convolution layer has a one-dimensional convolution kernel of a second scale, the first scale being different from the second scale; and a multi-scale cascade unit 1033, configured to cascade the first-scale computing resource feature vector and the second-scale computing resource feature vector to obtain the plurality of computing resource timing feature vectors.
It should be noted that the time series feature extractor is essentially a deep neural network model based on deep learning, which is capable of fitting any function by a predetermined training strategy and has a higher feature extraction generalization capability than the conventional feature engineering.
The time sequence feature extractor comprises a plurality of parallel one-dimensional convolution layers, wherein in the process of feature extraction of the time sequence feature extractor, the plurality of parallel one-dimensional convolution layers perform one-dimensional convolution coding on input data by one-dimensional convolution check with different scales so as to capture local implicit features of a sequence.
Specifically, in the embodiment of the present application, the topology matrix construction module 104 is configured to construct a topology matrix of communication distances between the servers in the distributed server cluster. Meanwhile, constructing a communication distance topology matrix among all servers in the distributed server cluster. As noted above, for a cluster of distributed servers, the servers are not completely independent of each other, and they need to cooperate to accomplish tasks by communication. Therefore, the residual condition of the real-time computing resources of each server and the communication distance between the servers are considered, and the computing task quantity which each server should bear can be more accurately determined, so that the purpose of optimizing the utilization of the computing resources is achieved.
Specifically, in this step, a topology matrix of communication distances between the servers in the distributed server cluster is constructed in order to describe the communication relationships and distances between the servers in the distributed server cluster. In the process of computing resource allocation, not only real-time states and change trends of all servers but also communication relations among all servers need to be considered, so that the collaboration among all servers in the distributed server cluster is better utilized. Therefore, by constructing the communication distance topology matrix, the communication relation among different servers can be reflected, and the overall structure and characteristics of the distributed server cluster are further reflected.
Specifically, in the embodiment of the present application, the feature extraction module 105 is configured to pass the communication distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix. And then, the communication distance topology matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix. Here, in the technical solution of the present application, the value of each position on the off-diagonal position in the topology matrix of communication distances is used to represent the communication distance between the corresponding two servers.
And, convolution-kernel-based feature extraction is performed on the communication distance topology matrix using a convolutional neural network model, which has excellent performance in the field of local correlation feature extraction, to capture the correlation pattern features between the communication distances of the respective servers. Those skilled in the art will appreciate that convolutional neural networks have the advantages of multiple levels of abstraction, shared parameters, translational invariance, and the like, and are widely used in the field of image processing. Applying the convolutional neural network model to the processing of the communication distance topology matrix can effectively extract the network topology relations among different servers, reflecting the communication intensity and frequency between them.
Wherein, the feature extraction module 105 is configured to: use each layer of the convolutional neural network model serving as the feature extractor to perform, in the forward pass of the layers, convolution processing, pooling along the channel dimension, and nonlinear activation on the input data, wherein the output of the last layer of the model is the communication topology feature matrix, and the input of the first layer of the model is the communication distance topology matrix.
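A minimal numpy sketch of one such layer in the order the module describes (convolution, pooling along the channel dimension, nonlinear activation); the kernels are hypothetical stand-ins for learned filters, and a real model would stack several such layers:

```python
import numpy as np

def conv2d_valid(x, kernel):
    """2-D valid convolution of a single-channel matrix with one kernel."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_layer(x, kernels):
    """One layer as described: convolution (one feature map per kernel),
    pooling along the channel dimension, then nonlinear activation (ReLU)."""
    feature_maps = np.stack([conv2d_valid(x, k) for k in kernels])  # (C, h, w)
    pooled = feature_maps.mean(axis=0)   # pool across the channel dimension
    return np.maximum(pooled, 0.0)       # nonlinear activation

topo = np.array([[0., 2., 5.],
                 [2., 0., 3.],
                 [5., 3., 0.]])          # toy communication-distance topology matrix
kernels = [np.ones((2, 2)) / 4, np.eye(2) / 2]   # hypothetical 2x2 kernels
feat = cnn_layer(topo, kernels)          # communication topology feature matrix
```

The zero diagonal of the input and the symmetry of the distances carry through the convolution, which is how local correlation patterns between server pairs end up in the feature matrix.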
The convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network and has wide application in the fields of image recognition and the like. The convolutional neural network may include an input layer, a hidden layer, and an output layer, where the hidden layer may include a convolutional layer, a pooling layer, an activation layer, a full connection layer, etc., where the previous layer performs a corresponding operation according to input data, outputs an operation result to the next layer, and obtains a final result after the input initial data is subjected to a multi-layer operation.
The convolutional neural network model has excellent performance in the aspect of image local feature extraction by taking a convolutional kernel as a feature filtering factor, and has stronger feature extraction generalization capability and fitting capability compared with the traditional image feature extraction algorithm based on statistics or feature engineering.
Specifically, in the embodiment of the present application, the graph neural network module 106 is configured to pass the plurality of computing resource timing feature vectors and the communication topology feature matrix through a graph neural network model to obtain a topology global computing resource timing feature matrix. And then, the plurality of computing resource time sequence feature vectors and the communication topology feature matrix are processed through a graph neural network model to obtain a topology global computing resource time sequence feature matrix. The time sequence feature vectors of the plurality of computing resources and the communication topological feature matrix are processed through a graph neural network model to obtain the time sequence feature matrix of the topological global computing resources, so that the computing resource features and the communication distance topological features of all servers in the distributed server cluster are comprehensively considered, and the integrity and the synergy of the distributed server cluster are reflected.
In the process of computing resource allocation, the real-time state and the change trend of the computing resources of each server and the communication relationship among the servers need to be considered simultaneously. Therefore, the graph neural network model is used for processing the time sequence feature vectors of the plurality of computing resources and the communication topology feature matrix, so that the cooperative characteristic among the servers in the distributed server cluster can be better described, and the accuracy of computing resource allocation is improved.
Those of ordinary skill in the art will appreciate that the graph neural network model is capable of efficiently learning the feature information of nodes and edges in the graph structure and extracting useful features therefrom to provide support for subsequent computing task assignments. The multiple computing resource time sequence feature vectors and the communication topology feature matrix are combined, a graph reflecting the overall state of the distributed server cluster can be established, and then the graph neural network model is utilized to process and analyze the graph, so that a more comprehensive and accurate topology global computing resource time sequence feature matrix is obtained.
Specifically, in the embodiment of the present application, the optimization module 107 is configured to perform feature distribution integrity enhancement on the topological global computing resource time sequence feature matrix to obtain an optimized topological global computing resource time sequence feature matrix. In particular, in the technical scheme of the present application, when the plurality of computing resource time sequence feature vectors and the communication topology feature matrix are processed through a graph neural network model to obtain the topological global computing resource time sequence feature matrix, the graph neural network model performs feature fusion on the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through learnable neural network parameters to obtain the topological global computing resource time sequence feature matrix containing the irregular communication distance topological features and the remaining computing resource amount time sequence features, wherein each row vector in the topological global computing resource time sequence feature matrix represents the remaining computing resource time sequence features of the corresponding server, fused with the communication topological features with respect to the other servers.
However, in essence, the row vectors of the topological global computing resource time sequence feature matrix are merely arranged two-dimensionally for data aggregation. If the integrity of the feature distribution of the topological global computing resource time sequence feature matrix can be further improved, the certainty of its feature expression can be improved accordingly. Then, when each of the plurality of computing resource time sequence feature vectors is taken as a query feature vector and its matrix product with the topological global computing resource time sequence feature matrix is calculated to obtain the plurality of classification feature vectors, the certainty and structure of the feature distribution of the classification feature vectors are enhanced, improving their classification accuracy.
Based on this, the applicant of the present application performs vector spectral clustering agent learning fusion optimization on the topological global computing resource time sequence feature matrix, denoted M, to obtain an optimized topological global computing resource time sequence feature matrix, denoted M′. Specifically, the integrity of the feature distribution of the topological global computing resource time sequence feature matrix is enhanced by an optimization formula constructed from the following quantities: M is the topological global computing resource time sequence feature matrix; M′ is the optimized topological global computing resource time sequence feature matrix; Mᵀ is the transpose of the topological global computing resource time sequence feature matrix; D is the distance matrix composed of the distances between every two corresponding row feature vectors of the topological global computing resource time sequence feature matrix; Dᵀ is the transpose of the distance matrix; mᵢ denotes each row vector of the topological global computing resource time sequence feature matrix; exp(·) denotes the position-wise exponential of a matrix, which raises the natural exponential function to the power of the feature value at each position of the matrix; and ⊙ and ⊕ denote position-wise multiplication and matrix addition, respectively.
Here, because the internal quasi-regression semantic features of each row vector of the topological global computing resource time sequence feature matrix are mixed with synthesized noise features, the boundary between meaningful quasi-regression semantic features and noise features becomes ambiguous. The vector spectral clustering agent learning fusion optimization introduces a spectral clustering agent that represents the spatial layout and semantic similarity between vectors, and uses the conceptual information of the association between the quasi-regression semantic features and the quasi-regression scene to perform hidden supervision propagation over the potential association attributes between the row vectors of the matrix. This improves the overall distribution dependency of the topological global computing resource time sequence feature matrix as a synthesized feature, and thereby improves the classification effect when the classification feature vectors are classified and regressed by the classifier.
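The spectral-clustering-style ingredients named above — a row-wise distance matrix and a position-wise exponential — can be sketched numerically as follows. Since the patent's exact optimization formula is published only as an image, the `similarity_reweight` helper below is an assumed stand-in that merely shows how an exponential affinity built from the distance matrix D can smooth the row vectors of M via matrix addition:

```python
import numpy as np

def pairwise_distance_matrix(m):
    """D[i, j] = Euclidean distance between row vectors m_i and m_j of M."""
    sq = (m ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (m @ m.T)
    return np.sqrt(np.clip(d2, 0.0, None))

def similarity_reweight(m):
    """Assumed stand-in optimization: build an exponential similarity
    kernel from the row-wise distance matrix, use it to smooth the rows
    of M, and add the smoothed rows back to M (matrix addition)."""
    d = pairwise_distance_matrix(m)
    affinity = np.exp(-d)                  # position-wise natural exponential
    affinity /= affinity.sum(axis=1, keepdims=True)
    return m + affinity @ m                # M ⊕ smoothed rows
```

Rows that lie close together in feature space reinforce each other, which is the intuition behind propagating hidden supervision across related row vectors.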
Specifically, in the embodiment of the present application, the matrix product calculating module 108 is configured to calculate the matrix product between each computing resource time sequence feature vector of the plurality of computing resource time sequence feature vectors and the optimized topology global computing resource time sequence feature matrix to obtain a plurality of classification feature vectors. That is, each computing resource time sequence feature vector is taken as a query feature vector, and the matrix product between the query feature vector and the optimized topological global computing resource time sequence feature matrix is calculated. In this way, the computing resource features of each server are mapped into the high-dimensional feature space of the topological global computing resource time sequence feature matrix to obtain the plurality of classification feature vectors.
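A minimal sketch of this query step, with hypothetical names: each server's feature vector is multiplied against the transposed topology-global matrix, yielding one classification feature vector per server:

```python
import numpy as np

def classification_vectors(resource_vecs, topo_global):
    """Map each server's query vector into the space spanned by the
    optimized topology-global matrix via a matrix product.

    resource_vecs : (n, d) — one query feature vector per server
    topo_global   : (n, d) — optimized topology-global feature matrix
    Returns a list of n classification feature vectors of length n.
    """
    # v_i (d,) times topo_global.T (d, n) -> (n,) classification vector
    return [v @ topo_global.T for v in resource_vecs]
```

Component j of server i's classification vector is the inner product of server i's query vector with row j of the topology-global matrix, i.e. a similarity score against every server's fused features.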
Specifically, in the embodiment of the present application, the normalization processing module 109 and the task amount allocation calculation module 110 are configured to pass the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and to perform a maximum value-based normalization process on the plurality of probability values to obtain a plurality of normalized probability values; the plurality of normalized probability values are then used as allocation proportions to allocate the computation task amount to each server in the distributed server cluster.
That is, the plurality of classification feature vectors are passed through the classifier to obtain a plurality of probability values, and the plurality of probability values are normalized based on their maximum value to obtain a plurality of normalized probability values. In the technical scheme of the present application, the plurality of normalized probability values are used as allocation proportions to allocate the computation task amount to each server in the distributed server cluster. In this way, when determining how to allocate the amount of computing resources requested by a user, the distributed server cluster serves as the access point, and the computing resource bearing proportions are adaptively allocated according to the real-time remaining computing resources of each server, so that the cooperativity among the servers and the specificity of each server are utilized more reasonably.
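The classifier-and-allocation step can be sketched as below. The fully connected layer followed by softmax is standard classifier practice; the interpretation of "maximum value-based normalization" as division by the largest probability, and the final renormalization into allocation proportions, are assumptions made for illustration:

```python
import numpy as np

def allocate_tasks(class_vecs, weight, bias, total_tasks):
    """Score each server with a fully connected layer + softmax, normalize
    the per-server probabilities by their maximum, and split the task amount.

    class_vecs : (n, k) classification feature vectors, one per server
    weight     : (k, c) fully connected layer weight matrix
    bias       : (c,)   fully connected layer bias vector
    """
    logits = class_vecs @ weight + bias
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)   # per-server softmax
    score = probs.max(axis=1)                   # one probability per server
    norm = score / score.max()                  # maximum-based normalization
    share = norm / norm.sum()                   # allocation proportions
    return share * total_tasks
```

Servers whose classification vectors score higher receive a proportionally larger slice of the incoming computation task amount.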
Wherein, the normalization processing module 109 is configured to: process the plurality of classification feature vectors using the classifier in a classification formula to obtain the plurality of probability values; wherein the classification formula is: O = softmax(W · V + B), where V represents the plurality of classification feature vectors, W is the weight matrix of the fully connected layer, and B is the bias vector of the fully connected layer.
In summary, the water service intelligent joint device 100 according to the embodiment of the present application has been described. The device obtains the remaining computing resource amounts of each server in the distributed server cluster at a plurality of predetermined time points within a predetermined period of time, and adopts an artificial intelligence technology based on deep learning to take the distributed server cluster as an access point and adaptively allocate the computing resource bearing proportions according to the remaining real-time computing resources of each server in the distributed server cluster, so as to more reasonably utilize the cooperativity among the servers and the specificity of each server.
In one embodiment of the present application, fig. 7 is a flowchart of a water affair intelligent joint control method according to an embodiment of the present application. As shown in fig. 7, the water affair intelligent joint control method according to the embodiment of the application includes: 201, obtaining the remaining computing resource amounts of each server in a distributed server cluster at a plurality of predetermined time points within a predetermined time period; 202, arranging the remaining computing resource amounts of the plurality of predetermined time points of each server into input vectors along the time dimension, respectively, to obtain a plurality of computing resource time sequence input vectors; 203, passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors; 204, constructing a communication distance topology matrix among the servers in the distributed server cluster; 205, passing the communication distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix; 206, passing the plurality of computing resource time sequence feature vectors and the communication topology feature matrix through a graph neural network model to obtain a topology global computing resource time sequence feature matrix; 207, performing feature distribution integrity enhancement on the topology global computing resource time sequence feature matrix to obtain an optimized topology global computing resource time sequence feature matrix; 208, taking each computing resource time sequence feature vector of the plurality of computing resource time sequence feature vectors as a query feature vector, and calculating the matrix product between the query feature vector and the optimized topology global computing resource time sequence feature matrix to obtain a plurality of classification feature vectors; 209, passing the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and performing maximum value-based normalization processing on the plurality of probability values to obtain a plurality of normalized probability values; and 210, allocating the computation task amount to each server in the distributed server cluster by taking the plurality of normalized probability values as allocation proportions.
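Steps 201-210 can be strung together in a compact numerical sketch. The random projections standing in for the trained timing, topology, and graph feature extractors are assumptions made purely to show the data flow, not the patent's models:

```python
import numpy as np

def control_pipeline(usage_history, comm_distance, total_tasks, seed=0):
    """End-to-end sketch of steps 201-210 with stand-in feature extractors.

    usage_history : (n, t) remaining-resource samples per server over time
    comm_distance : (n, n) communication-distance topology matrix
    Returns the per-server computation task amounts (sums to total_tasks).
    """
    rng = np.random.default_rng(seed)
    n, t = usage_history.shape
    # Steps 202-203: stand-in timing feature extractor (random projection).
    feats = np.tanh(usage_history @ rng.standard_normal((t, 8)) * 0.1)
    # Steps 204-205: stand-in topology feature extractor.
    topo = np.tanh(comm_distance @ rng.standard_normal((n, n)) * 0.1)
    # Step 206: graph fusion — aggregate neighbour features over the topology.
    fused = np.tanh(topo / n @ feats)
    # Steps 208-210: query products, scores, max normalization, allocation.
    scores = np.array([float(np.abs(v @ fused.T).max()) for v in feats])
    norm = scores / scores.max()
    return norm / norm.sum() * total_tasks
```

With real trained extractors in place of the random projections, the returned vector is exactly the allocation-proportion split of step 210.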
Fig. 8 is a schematic diagram of a system architecture of the water affair intelligent joint control method according to an embodiment of the application. As shown in fig. 8, in this system architecture, first, the remaining computing resource amounts of each server in the distributed server cluster at a plurality of predetermined time points within a predetermined time period are obtained. Then, the remaining computing resource amounts of the plurality of predetermined time points of each server are arranged into input vectors along the time dimension, respectively, to obtain a plurality of computing resource time sequence input vectors, and the plurality of computing resource time sequence input vectors are respectively passed through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors. Next, a communication distance topology matrix among the servers in the distributed server cluster is constructed and passed through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix; then the plurality of computing resource time sequence feature vectors and the communication topology feature matrix are passed through a graph neural network model to obtain a topology global computing resource time sequence feature matrix. Then, feature distribution integrity enhancement is performed on the topology global computing resource time sequence feature matrix to obtain an optimized topology global computing resource time sequence feature matrix, and each computing resource time sequence feature vector of the plurality of computing resource time sequence feature vectors is taken as a query feature vector, with the matrix product between the query feature vector and the optimized topology global computing resource time sequence feature matrix calculated to obtain a plurality of classification feature vectors. Then, the plurality of classification feature vectors are passed through a classifier to obtain a plurality of probability values, and the plurality of probability values are normalized based on their maximum value to obtain a plurality of normalized probability values. Finally, the computation task amount is allocated to each server in the distributed server cluster by taking the plurality of normalized probability values as allocation proportions.
In a specific example, in the water service intelligent joint control method, the first convolution layer and the second convolution layer use one-dimensional convolution kernels with different scales, respectively.
In a specific example, in the water service intelligent joint control method, the step of passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor including a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors includes: inputting the plurality of computing resource time sequence input vectors into a first convolution layer of the time sequence feature extractor to obtain a first scale computing resource feature vector, wherein the first convolution layer is provided with a one-dimensional convolution kernel of a first scale; inputting the plurality of computing resource timing input vectors into a second convolution layer of the timing feature extractor to obtain a second scale computing resource feature vector, wherein the second convolution layer has a one-dimensional convolution kernel of a second scale, the first scale being different from the second scale; and cascading the first scale computing resource feature vector and the second scale computing resource feature vector to obtain the plurality of computing resource time sequence feature vectors.
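A toy version of the two-branch extractor follows, assuming valid-mode one-dimensional convolution and random stand-in kernels; the kernel sizes 3 and 5 are illustrative choices for the two different scales:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1D convolution (ML-style cross-correlation) of a series."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def multiscale_features(x, k_small=3, k_large=5, seed=0):
    """Two 1D convolutions with one-dimensional kernels of different
    scales, cascaded (concatenated) into one feature vector."""
    rng = np.random.default_rng(seed)
    f1 = conv1d(x, rng.standard_normal(k_small))   # first-scale branch
    f2 = conv1d(x, rng.standard_normal(k_large))   # second-scale branch
    return np.concatenate([f1, f2])                # cascade the two branches
```

The small kernel picks up short-term fluctuations in the resource series while the large kernel captures slower trends, which is the point of using two different scales.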
In a specific example, in the water service intelligent joint control method, the communication distance topology matrix is passed through a convolutional neural network model as a feature extractor to obtain a communication topology feature matrix, which includes: and respectively carrying out convolution processing, pooling processing along a channel dimension and nonlinear activation processing on input data in forward transmission of layers by using each layer of the convolutional neural network model serving as the feature extractor, wherein the output of the last layer of the convolutional neural network model serving as the feature extractor is used as the communication topology feature matrix, and the input of the first layer of the convolutional neural network model serving as the feature extractor is used as the communication distance topology matrix.
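The per-layer processing described here — convolution, pooling along the channel dimension, and nonlinear activation — can be sketched for a single layer as follows; mean pooling and ReLU are assumed concrete choices for the pooling and activation functions:

```python
import numpy as np

def cnn_layer(x, kernels):
    """One feature-extractor layer: 2D convolution with several kernels,
    mean pooling along the channel dimension, then ReLU activation.

    x       : (h, w) input matrix (e.g. the communication-distance topology)
    kernels : (c, kh, kw) convolution kernels, one output channel each
    """
    c, kh, kw = kernels.shape
    h, w = x.shape
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ci in range(c):                      # convolution per channel
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[ci, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[ci])
    pooled = out.mean(axis=0)                # pooling along the channel dimension
    return np.maximum(pooled, 0.0)           # nonlinear activation (ReLU)
```

Stacking several such layers, with the distance topology matrix as the first layer's input and the last layer's output taken as the communication topology feature matrix, mirrors the forward pass described above.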
In a specific example, in the water affair intelligent joint control method, performing feature distribution integrity enhancement on the topological global computing resource time sequence feature matrix to obtain an optimized topological global computing resource time sequence feature matrix includes: enhancing the integrity of the feature distribution of the topological global computing resource time sequence feature matrix by an optimization formula to obtain the optimized topological global computing resource time sequence feature matrix; wherein the optimization formula is constructed from the following quantities: M is the topological global computing resource time sequence feature matrix; M′ is the optimized topological global computing resource time sequence feature matrix; Mᵀ is the transpose of the topological global computing resource time sequence feature matrix; D is the distance matrix composed of the distances between every two corresponding row feature vectors of the topological global computing resource time sequence feature matrix; Dᵀ is the transpose of the distance matrix; mᵢ denotes each row vector of the topological global computing resource time sequence feature matrix; exp(·) denotes the position-wise exponential of a matrix, which raises the natural exponential function to the power of the feature value at each position of the matrix; and ⊙ and ⊕ denote position-wise multiplication and matrix addition, respectively.
In a specific example, in the water service intelligent joint control method, passing the plurality of classification feature vectors through a classifier to obtain a plurality of probability values, and performing maximum value-based normalization processing on the plurality of probability values to obtain a plurality of normalized probability values, includes: processing the plurality of classification feature vectors using the classifier in a classification formula to obtain the plurality of probability values; wherein the classification formula is: O = softmax(W · V + B), where V represents the plurality of classification feature vectors, W is the weight matrix of the fully connected layer, and B is the bias vector of the fully connected layer.
It will be appreciated by those skilled in the art that the specific operations of the respective steps in the above water service intelligent joint control method have been described in detail in the above description of the water service intelligent joint apparatus with reference to fig. 1 to 6, and thus, repetitive descriptions thereof will be omitted.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described method.
In one embodiment of the present application, there is also provided a computer-readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in the flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open words that mean "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or terminal device that comprises the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A water affair intelligence alliance device characterized by comprising: the data acquisition module is used for acquiring the residual computing resource amounts of a plurality of preset time points of each server in the distributed server cluster in a preset time period; the vector arrangement module is used for arranging the residual computing resource amounts of a plurality of preset time points of each server in a preset time period into input vectors according to time dimensions respectively to obtain a plurality of computing resource time sequence input vectors; the time sequence feature extraction module is used for enabling the plurality of computing resource time sequence input vectors to respectively pass through a time sequence feature extractor comprising a first convolution layer and a second convolution layer so as to obtain a plurality of computing resource time sequence feature vectors; the topology matrix construction module is used for constructing a communication distance topology matrix among all servers in the distributed server cluster; the feature extraction module is used for enabling the communication distance topology matrix to pass through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix; the graph neural network module is used for enabling the plurality of computing resource time sequence feature vectors and the communication topology feature matrix to pass through a graph neural network model to obtain a topology global computing resource time sequence feature matrix; the optimization module is used for carrying out feature distribution integrity enhancement on the topological global computing resource time sequence feature matrix so as to obtain an optimized topological global computing resource time sequence feature matrix; the matrix product calculation module is used for calculating the matrix product between each computing resource time sequence feature vector 
in the computing resource time sequence feature vectors and the optimized topology global computing resource time sequence feature matrix by taking the computing resource time sequence feature vector as a query feature vector so as to obtain a plurality of classification feature vectors; the normalization processing module is used for enabling the plurality of classification feature vectors to pass through a classifier to obtain a plurality of probability values, and carrying out maximum value-based normalization processing on the plurality of probability values to obtain a plurality of normalization probability values; and the task amount allocation calculation module is used for allocating the calculation task amount to each server in the distributed server cluster by taking the plurality of normalized probability values as allocation proportions.
2. The water service intelligent joint device according to claim 1, wherein the first convolution layer and the second convolution layer each use one-dimensional convolution kernels having different dimensions.
3. The water affair intelligent joint device of claim 2, wherein the timing feature extraction module comprises: a first scale feature extraction unit, configured to input the plurality of computing resource timing input vectors into a first convolution layer of the timing feature extractor to obtain a first scale computing resource feature vector, wherein the first convolution layer has a one-dimensional convolution kernel of a first scale; a second scale feature extraction unit, configured to input the plurality of computing resource timing input vectors into a second convolution layer of the timing feature extractor to obtain a second scale computing resource feature vector, wherein the second convolution layer has a one-dimensional convolution kernel of a second scale, the first scale being different from the second scale; and a multi-scale cascading unit, configured to cascade the first scale computing resource feature vector and the second scale computing resource feature vector to obtain the plurality of computing resource time sequence feature vectors.
4. The water affair intelligent joint device according to claim 3, wherein the feature extraction module is configured to: use each layer of the convolutional neural network model serving as the feature extractor to respectively perform, in the forward transmission of the layers, convolution processing, pooling processing along the channel dimension, and nonlinear activation processing on input data, wherein the output of the last layer of the convolutional neural network model serving as the feature extractor is used as the communication topology feature matrix, and the input of the first layer of the convolutional neural network model serving as the feature extractor is the communication distance topology matrix.
5. The water service intelligent joint device according to claim 4, wherein the optimizing module is configured to: enhance the integrity of the feature distribution of the topological global computing resource time sequence feature matrix by an optimization formula to obtain the optimized topological global computing resource time sequence feature matrix; wherein the optimization formula is constructed from the following quantities: M is the topological global computing resource time sequence feature matrix; M′ is the optimized topological global computing resource time sequence feature matrix; Mᵀ is the transpose of the topological global computing resource time sequence feature matrix; D is the distance matrix composed of the distances between every two corresponding row feature vectors of the topological global computing resource time sequence feature matrix; Dᵀ is the transpose of the distance matrix; mᵢ denotes each row vector of the topological global computing resource time sequence feature matrix; exp(·) denotes the position-wise exponential of a matrix, which raises the natural exponential function to the power of the feature value at each position of the matrix; and ⊙ and ⊕ denote position-wise multiplication and matrix addition, respectively.
6. The water service intelligent joint device of claim 5, wherein the normalization processing module is configured to: process the plurality of classification feature vectors using the classifier in a classification formula to obtain the plurality of probability values; wherein the classification formula is: O = softmax(W · V + B), where V represents the plurality of classification feature vectors, W is the weight matrix of the fully connected layer, and B is the bias vector of the fully connected layer.
7. The control method of the water affair intelligent joint equipment is characterized by comprising the following steps of: obtaining the residual computing resource amounts of a plurality of preset time points of each server in a distributed server cluster in a preset time period; arranging the residual computing resource amounts of a plurality of preset time points of each server in a preset time period into input vectors according to time dimensions respectively to obtain a plurality of computing resource time sequence input vectors; respectively passing the plurality of computing resource time sequence input vectors through a time sequence feature extractor comprising a first convolution layer and a second convolution layer to obtain a plurality of computing resource time sequence feature vectors; constructing a communication distance topology matrix among all servers in the distributed server cluster; the communication distance topology matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a communication topology feature matrix; the time sequence feature vectors of the plurality of computing resources and the communication topological feature matrix are processed through a graph neural network model to obtain a topological global computing resource time sequence feature matrix; carrying out feature distribution integrity reinforcement on the topological global computing resource time sequence feature matrix to obtain an optimized topological global computing resource time sequence feature matrix; taking each computing resource time sequence feature vector in the computing resource time sequence feature vectors as a query feature vector, and calculating a matrix product between the query feature vector and the optimized topology global computing resource time sequence feature matrix to obtain a plurality of classification feature vectors; the classification feature vectors pass through a classifier to obtain a 
plurality of probability values, and the probability values are subjected to maximum value-based normalization processing to obtain a plurality of normalized probability values; and allocating the calculation task quantity to each server in the distributed server cluster by taking the plurality of normalized probability values as allocation proportion.
8. The control method of the water affair intelligent connection equipment according to claim 7, wherein the first convolution layer and the second convolution layer use one-dimensional convolution kernels of different scales, respectively.
9. The control method of the water affair intelligent connection equipment according to claim 8, wherein passing the plurality of computing resource time sequence input vectors respectively through the time sequence feature extractor comprising the first convolution layer and the second convolution layer to obtain the plurality of computing resource time sequence feature vectors comprises: inputting the plurality of computing resource time sequence input vectors into the first convolution layer of the time sequence feature extractor to obtain first-scale computing resource feature vectors, wherein the first convolution layer has a one-dimensional convolution kernel of a first scale; inputting the plurality of computing resource time sequence input vectors into the second convolution layer of the time sequence feature extractor to obtain second-scale computing resource feature vectors, wherein the second convolution layer has a one-dimensional convolution kernel of a second scale; and cascading the first-scale computing resource feature vectors and the second-scale computing resource feature vectors to obtain the plurality of computing resource time sequence feature vectors.
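The two-branch extraction and cascading of claim 9 can be sketched as below. The valid-mode convolution helper and the all-ones kernels are illustrative stand-ins; in the patented method the kernels are learned parameters and only their scales (lengths) differ between the two branches:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid-mode one-dimensional convolution (cross-correlation) of a sequence."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def multiscale_features(x, kernel_small, kernel_large):
    """Run one computing-resource time sequence input vector through two
    one-dimensional kernels of different scales and cascade (concatenate)
    the resulting feature vectors, as in the claimed two-branch extractor."""
    first_scale = conv1d_valid(x, kernel_small)   # first convolution layer branch
    second_scale = conv1d_valid(x, kernel_large)  # second convolution layer branch
    return np.concatenate([first_scale, second_scale])

# Example: a 6-point residual-resource series, kernels of scale 2 and 3.
series = np.arange(6, dtype=float)
features = multiscale_features(series, np.ones(2), np.ones(3))
# → [1, 3, 5, 7, 9, 3, 6, 9, 12]
```

The smaller kernel responds to short-term fluctuations in the resource series, while the larger kernel smooths over a wider window; cascading keeps both views in one feature vector.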
10. The control method of the water affair intelligent connection equipment according to claim 9, wherein passing the communication distance topology matrix through the convolutional neural network model serving as the feature extractor to obtain the communication topology feature matrix comprises: performing, in the forward pass of each layer of the convolutional neural network model serving as the feature extractor, convolution processing, pooling processing along the channel dimension, and nonlinear activation processing on the input data, wherein the output of the last layer of the convolutional neural network model serving as the feature extractor is the communication topology feature matrix, and the input of the first layer of the convolutional neural network model serving as the feature extractor is the communication distance topology matrix.
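A minimal numpy sketch of the per-layer operations named in claim 10 (convolution, pooling along the channel dimension, nonlinear activation). The filter bank here is a hypothetical stand-in for the model's learned filters, and mean pooling plus ReLU are assumed concrete choices for the pooling and activation the claim leaves unspecified:

```python
import numpy as np

def cnn_layer(x, filters):
    """One layer of the claimed feature extractor applied to a 2-D matrix:
    2-D valid-mode convolution with a bank of filters (one output channel
    per filter), mean pooling along the channel dimension, and ReLU."""
    kh, kw = filters.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # Convolution: slide each filter over the input matrix.
    channels = np.array([
        [[np.sum(x[i:i + kh, j:j + kw] * f) for j in range(ow)] for i in range(oh)]
        for f in filters
    ])
    pooled = channels.mean(axis=0)    # pooling along the channel dimension
    return np.maximum(pooled, 0.0)    # nonlinear activation (ReLU)

# Example: a 3x3 communication-distance matrix through one layer
# with a single 2x2 averaging-style filter (illustrative only).
distance_matrix = np.ones((3, 3))
feature_map = cnn_layer(distance_matrix, np.ones((1, 2, 2)))
```

Stacking several such layers, with the communication distance topology matrix as the first layer's input, yields the communication topology feature matrix as the last layer's output.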
CN202310620565.9A 2023-05-30 2023-05-30 Water affair intelligent connection equipment and control method thereof Pending CN116578420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310620565.9A CN116578420A (en) 2023-05-30 2023-05-30 Water affair intelligent connection equipment and control method thereof


Publications (1)

Publication Number Publication Date
CN116578420A true CN116578420A (en) 2023-08-11

Family

ID=87543016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310620565.9A Pending CN116578420A (en) 2023-05-30 2023-05-30 Water affair intelligent connection equipment and control method thereof

Country Status (1)

Country Link
CN (1) CN116578420A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274903A (en) * 2023-09-25 2023-12-22 安徽南瑞继远电网技术有限公司 Intelligent early warning device and method for electric power inspection based on intelligent AI chip
CN117274903B (en) * 2023-09-25 2024-04-19 安徽南瑞继远电网技术有限公司 Intelligent early warning device and method for electric power inspection based on intelligent AI chip

Similar Documents

Publication Publication Date Title
CN105930360B Text indexing method and system based on the Storm stream computing framework
CN114915629A (en) Information processing method, device, system, electronic equipment and storage medium
CN114416352A (en) Computing resource allocation method and device, electronic equipment and storage medium
CN103310460A (en) Image characteristic extraction method and system
CN116578420A (en) Water affair intelligent connection equipment and control method thereof
CN106569896A (en) Data distribution and parallel processing method and system
CN104008012A (en) High-performance MapReduce realization mechanism based on dynamic migration of virtual machine
CN104618304A (en) Data processing method and data processing system
Gouineau et al. PatchWork, a scalable density-grid clustering algorithm
CN109493077A (en) Activity recognition method and device, electronic equipment, storage medium
Zhengqiao et al. Research on clustering algorithm for massive data based on Hadoop platform
CN112866003A (en) Block chain multi-chain layered collaborative technology system
Borelli et al. Architectural software patterns for the development of IoT smart applications
US10547565B2 (en) Automatic determination and just-in-time acquisition of data for semantic reasoning
Liu et al. Aedfl: efficient asynchronous decentralized federated learning with heterogeneous devices
CN116684274A (en) Cloud security service function chain automatic arrangement system and method based on SDN
CN110378564A (en) Monitoring model generation method, device, terminal device and storage medium
CN113326172B (en) Operation and maintenance knowledge processing method, device and equipment
CN114756301A (en) Log processing method, device and system
Xue et al. Diversified point cloud classification using personalized federated learning
Liu et al. An anomaly detector deployment awareness detection framework based on multi-dimensional resources balancing in cloud platform
Wang et al. High-performance complex event processing for large-scale RFID applications
Wang et al. The research on electric power control center credit monitoring and management using cloud computing and smart workflow
CN116757388B (en) Electric power market clearing method and device based on redundancy constraint screening
US11681545B2 (en) Reducing complexity of workflow graphs through vertex grouping and contraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination