CN112463378A - Server asset scanning method, system, electronic equipment and storage medium - Google Patents

Info

Publication number
CN112463378A
Authority
CN
China
Prior art keywords
collector
parameters
performance index
server
load
Prior art date
Legal status
Granted
Application number
CN202011357445.7A
Other languages
Chinese (zh)
Other versions
CN112463378B (en)
Inventor
赵相如
贾伟
Current Assignee
Beijing Inspur Data Technology Co Ltd
Original Assignee
Beijing Inspur Data Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Inspur Data Technology Co Ltd filed Critical Beijing Inspur Data Technology Co Ltd
Priority to CN202011357445.7A
Publication of CN112463378A
Application granted
Publication of CN112463378B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a server asset scanning method, which comprises the following steps: acquiring performance index parameters and load parameters of all collectors; calculating a current-period weight for each collector according to the performance index parameters and the load parameters; and setting the collector with the highest current-period weight as the target collector and using the target collector to scan the assets of the server. The method can improve server asset scanning efficiency. The application also discloses a server asset scanning system, an electronic device and a storage medium that share the same beneficial effects.

Description

Server asset scanning method, system, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a server asset scanning method, a server asset scanning system, an electronic device, and a storage medium.
Background
Assets added to server management software are obtained through collector scans. Existing server management software simply assigns asset scanning to whichever collector is currently idle, so it may call a collector whose CPU or memory load is already high; as a result, the collector sometimes takes a long time to scan the assets or the collection fails.
Therefore, how to improve the server asset scanning efficiency is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a server asset scanning method, a server asset scanning system, an electronic device and a storage medium that can improve server asset scanning efficiency.
In order to solve the above technical problem, the present application provides a server asset scanning method, including:
acquiring performance index parameters and load parameters of all collectors;
calculating the current period weight of each collector according to the performance index parameters and the load parameters;
and setting the collector with the highest weight in the current period as a target collector, and scanning assets of the server by using the target collector.
Optionally, after calculating the current cycle weight of each collector according to the performance index parameter and the load parameter, the method further includes:
judging whether the current period weights of all the collectors are the same or not;
if yes, predicting the next period weights of all collectors based on the machine learning model, and setting the collector with the highest next period weight as a target collector;
and if not, executing the operation of setting the collector with the highest current period weight as the target collector.
Optionally, the predicting the next cycle weight of all collectors based on the machine learning model includes:
performing data preprocessing and feature extraction operations on the performance index parameters and the load parameters to obtain feature data;
and converting the characteristic data into a bag-of-words model, comparing words in the bag-of-words model with preset word stems in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm, and determining the next period weight of the collector according to a comparison result.
Optionally, performing data preprocessing and feature extraction on the performance index parameter and the load parameter to obtain feature data, including:
removing punctuation marks in the performance index parameters and the load parameters to obtain parameter texts;
intercepting a text word stem in the parameter text, and storing the text word stem to a target document;
adding preset word stems in the target document so as to expand the number of the word stems in the target document;
and setting all word stems in the target document as the characteristic data.
Optionally, comparing the words in the bag-of-words model with preset stems in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm, including:
and comparing the words in the word bag model with preset word stems in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm based on the length, the sequence and the word stem type of the load parameters.
Optionally, the method further includes:
and acquiring historical performance index parameters and historical load parameters of the collector, and generating a preset word stem in the database according to the historical performance index parameters and the historical load parameters.
Optionally, after predicting the next cycle weights of all collectors based on the machine learning model, the method further includes:
calculating the error rate of the prediction result of the machine learning model through matrix multiplication and the derivative of an activation function sigmoid;
and if the error rate is less than the preset value, executing the operation of setting the collector with the highest weight in the next period as the target collector.
The present application further provides a server asset scanning system, comprising:
the parameter acquisition module is used for acquiring performance index parameters and load parameters of all the collectors;
the weight calculation module is used for calculating the current period weight of each collector according to the performance index parameters and the load parameters;
and the asset scanning module is used for setting the collector with the highest current period weight as a target collector and scanning assets of the server by using the target collector.
The present application also provides a storage medium having stored thereon a computer program that, when executed, performs the steps performed by the above-described server asset scanning method.
The application also provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps executed by the server asset scanning method when calling the computer program in the memory.
The application provides a server asset scanning method, which comprises the following steps: acquiring performance index parameters and load parameters of all collectors; calculating the current period weight of each collector according to the performance index parameters and the load parameters; and setting the collector with the highest weight in the current period as a target collector, and scanning assets of the server by using the target collector.
The method first obtains the performance index parameters and load parameters of all collectors and calculates the current-period weight of each collector from them. Because a collector with a higher current-period weight scans assets more efficiently, the collector with the highest current-period weight is set as the target collector and used to scan the server's assets; this improves the server's asset scanning efficiency and avoids overly long scan times and scan failures. The application also provides a server asset scanning system, an electronic device and a storage medium with the same beneficial effects, which are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of a server asset scanning method provided by an embodiment of the present application;
FIG. 2 is a flowchart of adaptive optimal-path collector processing according to an embodiment of the present application;
FIG. 3 is a flow chart of model training provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server asset scanning system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a server asset scanning method according to an embodiment of the present disclosure.
The specific steps may include:
s101: acquiring performance index parameters and load parameters of all collectors;
the collector is software running on the host equipment, the performance index parameters of the collector and the performance index parameters of the host equipment where the collector is located, and the load parameters are parameters for describing the current task amount of the collector. Specifically, the performance index parameters may include parameters such as a CPU utilization rate, a CPU load, a memory utilization rate, a memory load, and a network delay, and the load parameters may be the number of servers that are left in the collector and need to perform asset scanning.
S102: calculating the current period weight of each collector according to the performance index parameters and the load parameters;
in this embodiment, a corresponding sub-weight may be set for each performance index parameter and load parameter, for example, when the CPU utilization is 50%, the sub-weight of the performance index parameter, which is the CPU utilization, may be set to 10; when the CPU utilization is 80%, the sub-weight of the CPU utilization, which is a performance index parameter, may be set to 2. In this embodiment, a corresponding relationship between the value of each performance index parameter and the sub-weight may be preset, the sub-weight corresponding to each performance index parameter and the load parameter is queried based on the corresponding relationship, and the sum of all the sub-weights is used as the current period weight of the collector. The current period weight refers to the weight of the collector in the current collection period, and the larger the value of the current period weight is, the higher the resource scanning efficiency of the collector is.
S103: and setting the collector with the highest weight in the current period as a target collector, and scanning assets of the server by using the target collector.
In this embodiment, the collector with the highest current-period weight is selected as the target collector, and the target collector is used to scan the assets of the server. When scanning server assets, the states of components such as the server's CPU, memory, physical disks, logical disks, network cards and fans may be collected.
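Continuing the sketch above, selecting the target collector then reduces to taking the maximum over the current-period weights; the component list mirrors the one in the description, and the scan call itself is not shown:

```python
def pick_target_collector(collectors: list[CollectorMetrics]) -> CollectorMetrics:
    """S103: the collector with the highest current-period weight becomes the target."""
    return max(collectors, key=current_period_weight)

# Component states a scan by the target collector may cover.
SCANNED_COMPONENTS = ["CPU", "memory", "physical disk", "logical disk",
                      "network card", "fan"]
```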
In this embodiment, the performance index parameters and load parameters of all collectors are obtained first, and the current-period weight of each collector is calculated from them. Because a collector with a higher current-period weight scans assets more efficiently, the collector with the highest current-period weight is set as the target collector and used to scan the server's assets, which improves the server's asset scanning efficiency and avoids overly long scan times and scan failures.
As a possible implementation, after the current-period weight of each collector is calculated from the performance index parameters and load parameters, it may also be determined whether the current-period weights of all collectors are the same. If they are, the next-period weights of all collectors are predicted based on a machine learning model and the collector with the highest next-period weight is set as the target collector; if they are not, the operation of S103, setting the collector with the highest current-period weight as the target collector, is executed.
Further, the process of predicting the next-period weights of all collectors based on the machine learning model may include the following operations: performing data preprocessing and feature extraction on the performance index parameters and load parameters to obtain feature data; converting the feature data into a bag-of-words model; comparing the words in the bag-of-words model with preset word stems in a database, in a hidden layer of the machine learning model, through a multi-concurrency mode and a search-model algorithm; and determining the next-period weight of each collector according to the comparison result. The preset stems are generated by acquiring historical performance index parameters and historical load parameters of the collectors and generating the preset word stems in the database from them.
Specifically, the process of performing data preprocessing and feature extraction on the performance index parameter and the load parameter to obtain feature data includes: removing punctuation marks in the performance index parameters and the load parameters to obtain parameter texts; intercepting a text word stem in the parameter text, and storing the text word stem to a target document; adding preset word stems in the target document so as to expand the number of the word stems in the target document; and setting all word stems in the target document as the characteristic data.
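A Python sketch of these four steps; the tokenisation details (lower-casing, whitespace splitting, the punctuation regex) are assumptions rather than requirements of the patent:

```python
import re

def extract_feature_stems(raw_parameters: list[str], preset_stems: list[str]) -> list[str]:
    """Data preprocessing and feature extraction over the raw parameter strings."""
    target_document: list[str] = []
    for text in raw_parameters:
        # 1. remove punctuation marks to obtain the parameter text
        parameter_text = re.sub(r"[^\w\s]", " ", text)
        # 2. intercept the text word stems and store them in the target document
        target_document.extend(parameter_text.lower().split())
    # 3. add preset word stems to expand the number of stems in the target document
    target_document.extend(preset_stems)
    # 4. all word stems in the target document are the feature data
    return target_document
```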
Specifically, in the above embodiment, the words in the bag-of-words model may be compared with the preset word stems in the database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm based on the length, the order, and the word stem type of the load parameter.
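One plausible reading of this comparison step is sketched below: each collector's feature stems are encoded as a 0/1 vector over the preset stem database, and the collectors are processed concurrently. The thread-pool concurrency and set-membership matching are assumptions standing in for the patent's multi-concurrency mode and search-model algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def to_bag_of_words(feature_stems: list[str], stem_database: list[str]) -> list[int]:
    """0/1 vector over the stem database; a 1 means the database stem was matched."""
    present = set(feature_stems)
    return [1 if stem in present else 0 for stem in stem_database]

def compare_all_collectors(feature_sets: list[list[str]],
                           stem_database: list[str]) -> list[list[int]]:
    """Compare every collector's feature stems with the database concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda stems: to_bag_of_words(stems, stem_database),
                             feature_sets))
```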
Further, after the next-period weights of all collectors are predicted based on the machine learning model, the error rate of the model's prediction result may be calculated through matrix multiplication and the derivative of the sigmoid activation function; if the error rate is less than a preset value, the operation of setting the collector with the highest next-period weight as the target collector is executed.
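The check can be sketched as follows; the exact error formula and the threshold value are assumptions, since the patent only names matrix multiplication and the sigmoid derivative:

```python
import numpy as np

def sigmoid_derivative(predicted: np.ndarray) -> np.ndarray:
    """Derivative of the sigmoid activation, evaluated at its own output."""
    return predicted * (1.0 - predicted)

def error_rate(predicted_weights: np.ndarray, observed_weights: np.ndarray) -> float:
    """Mean prediction error of the next-period weights, scaled by sigmoid'."""
    delta = (observed_weights - predicted_weights) * sigmoid_derivative(predicted_weights)
    return float(np.mean(np.abs(delta)))

ERROR_THRESHOLD = 0.05  # assumed preset value; the patent does not specify one

def prediction_is_trusted(predicted: np.ndarray, observed: np.ndarray) -> bool:
    """Act on the prediction only when its error rate is below the preset value."""
    return error_rate(predicted, observed) < ERROR_THRESHOLD
```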
The flow described in the above embodiments is explained below with an embodiment in practical use. Referring to FIG. 2, FIG. 2 is a flowchart of adaptive optimal-path collector processing according to an embodiment of the present application. The embodiment corresponding to FIG. 2 includes data definition, data feature extraction and preprocessing, model training, and application evaluation stages. The method first performs parameter scanning and then determines whether the weights of all collectors are consistent. If they are not consistent, the weights are compared and the optimal collector is selected adaptively; if they are consistent, data preprocessing, data feature extraction and model training are performed in sequence, and the optimal collector is then selected adaptively according to the model's output.
In this embodiment, the adaptive optimal-path collector method and a machine learning prediction algorithm are used to select the optimal collector, so that newly added assets can be scanned quickly, platform running time is reduced, and the loads of all collectors are balanced and lowered. Different parameter indexes need to be considered when selecting the optimal-path collector, such as CPU utilization, CPU load, memory utilization, memory load and network delay. When collecting server assets, the collector needs to scan various components, including the CPU, memory, physical disks, logical disks, network cards and fans, each of which has different attributes; for example, the CPU has a clock frequency, core count, model, state, cache and so on. If the collector that scans the assets is too heavily loaded, scanning takes too long or fails, and a method that does not perform optimal-path judgment cannot effectively utilize the best collector whose connection state is normal. In this embodiment, the optimal collector level is specified with weights: if the collectors' current-state weights differ, the collector with the lowest load may be selected preferentially with the adaptive collector method. If the current collector load weights are consistent, a model can be trained through machine learning: all collector performance index parameters are packaged in a training document as word stems, the collector performance indexes to be trained are converted into a bag-of-words model as input data, the data in the bag-of-words model are then converted into weights, and the weights are used to predict which collector will be in the optimal state in the next stage.
By using machine learning with a hidden layer and multi-concurrency, the state weight of each collector in the next stage can be judged quickly, and repeated iteration with the activation function reduces the error rate, yielding the probability that each collector is optimal. This greatly improves efficiency in the asset scanning process.
The various steps in the corresponding embodiment of fig. 2 will now be described in detail:
asset scanning: when the management platform scans assets, the collector finds the slot position of the assets in the cabinet and scans the health, running state and performance parameters of the assets. The server component types comprise a CPU, a memory, a hard disk, a Power supply, a fan, a BMC network port, a Raid card, a PCIE and the like, and the attributes of each component comprise a system Health state (Health), a Model (Model), a Power (Power), a utilization rate (CpuUage), an index (index), a state (status) and the like.
Data preprocessing and feature extraction: when the performance index parameters and loads of all collectors are consistent, the loads of all collectors in the next stage need to be predicted; the collectors' current index parameters and loads are fed as parameters to the trained model to predict whether each collector will be in the optimal state in the next stage. Symbols inside the parameters are removed, and the word stems are intercepted and stored in a training document. Training data added later is compared with the training document, and the comparison result is converted into a bag-of-words model composed of '0' and '1'; the model-matching result is converted into weights and stored, where a 1 indicates that the position index equals the stem-database index. Data subsequently added to the training document is classified and judged, and all components are associated with the load information. Finally, all matching results are stored in the training model.
Model training stage: the input layer converts the parameters to be trained into a bag-of-words model, compares the input parameters with the word stems in the hidden layer using a multi-concurrency mode and a search-model algorithm according to conditions such as the length, order and stem type of the load parameters, and stores all comparison results. The load weight output by the training model is then obtained. Error-rate measurement is performed with fast matrix multiplication in numpy and the derivative of the sigmoid activation function, and this process is iterated repeatedly over multiple groups of data to reduce the error rate. Referring to FIG. 3, FIG. 3 is a flowchart of model training according to an embodiment of the present application. In the model training process, performance index parameters and the collector's own load are collected first, and the actually collected CPU load, memory load, CPU utilization, memory utilization, network delay and collector load are compared with the stem library to obtain a bag-of-words model. In the machine learning process, the collector load indexes are fed into the input layer, the collector component attributes are matched concurrently in the hidden layer, and the current-period weight of the collector is output by the output layer.
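The following numpy sketch shows one way the training stage described here could look: a small one-hidden-layer network whose input is the bag-of-words matrix, iterated repeatedly with matrix multiplication and the sigmoid derivative to reduce the error. The network size, learning rate and epoch count are assumptions, not values from the patent:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def train_weight_model(bags: np.ndarray, observed: np.ndarray,
                       hidden_size: int = 16, epochs: int = 500, lr: float = 0.1):
    """Train a tiny network with numpy matrix multiplication and the sigmoid derivative.

    bags:     (n_samples, n_stems) 0/1 bag-of-words matrix for the collectors.
    observed: (n_samples, 1) collector load weights normalised to [0, 1].
    """
    rng = np.random.default_rng(0)
    w_hidden = rng.normal(scale=0.1, size=(bags.shape[1], hidden_size))
    w_out = rng.normal(scale=0.1, size=(hidden_size, 1))

    for _ in range(epochs):
        hidden = sigmoid(bags @ w_hidden)        # input layer -> hidden layer
        predicted = sigmoid(hidden @ w_out)      # hidden layer -> output weight
        error = observed - predicted
        # backpropagate with the sigmoid derivative s * (1 - s)
        d_out = error * predicted * (1 - predicted)
        d_hidden = (d_out @ w_out.T) * hidden * (1 - hidden)
        w_out += lr * hidden.T @ d_out           # fast matrix multiplication
        w_hidden += lr * bags.T @ d_hidden
    return w_hidden, w_out
```

With the trained matrices, a collector's next-period weight would be predicted as sigmoid(sigmoid(bag @ w_hidden) @ w_out) applied to its current bag-of-words vector.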
This embodiment provides an adaptive optimal-path collector method and a machine learning prediction algorithm to select the optimal collector, so that newly added assets can be scanned quickly, platform running time is reduced, and the loads of all collectors are balanced and lowered. The training process reduces the error rate by repeated iteration using matrix multiplication and an activation function. The collector load indexes obtained from input and sorting are compared with the word stems using a multi-concurrency mode and a search-model algorithm according to conditions such as the length, order and stem type of the collector load indexes. All collector performance index parameters are packaged in a training document as word stems, the collector performance indexes to be trained are converted into a bag-of-words model as input data, the data in the bag-of-words model are then converted into weights, and the weights are used to predict which collector will be in the optimal state in the next stage.
In this embodiment, an adaptive optimal-path collector process and several optimization techniques that reduce the amount of computation are designed for the machine learning training process; the collector that currently scans assets fastest can be judged from the collector's CPU and memory utilization, its load, and its own load weight. When the load weights are consistent, the probability of each collector's load weight in the next stage can be obtained from the existing training model. Compared with a naive Bayes algorithm, a training method with convolution, activation and pooling layers is added in the hidden layer, and performance, network delay, collector load and other factors are analyzed and computed simultaneously in a multi-concurrency mode, thereby improving the asset scanning efficiency of the server management platform and balancing the system load. During model training, error-rate measurement is performed with fast matrix multiplication in numpy and the derivative of the sigmoid activation function, and the error rate is minimized after repeated iteration.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a server asset scanning system according to an embodiment of the present disclosure;
the system may include:
a parameter obtaining module 100, configured to obtain performance index parameters and load parameters of all the collectors;
a weight calculation module 200, configured to calculate a current period weight of each collector according to the performance index parameter and the load parameter;
and the asset scanning module 300 is configured to set the collector with the highest current period weight as a target collector, and perform asset scanning on the server by using the target collector.
In this embodiment, the performance index parameters and load parameters of all collectors are obtained first, and the current-period weight of each collector is calculated from them. Because a collector with a higher current-period weight scans assets more efficiently, the collector with the highest current-period weight is set as the target collector and used to scan the server's assets, which improves the server's asset scanning efficiency and avoids overly long scan times and scan failures.
Further, the method also comprises the following steps:
the weight judgment module is used for judging whether the current period weights of all the collectors are the same or not after the current period weight of each collector is calculated according to the performance index parameter and the load parameter; if not, starting the workflow corresponding to the asset scanning module 300.
The collector selecting module predicts the weights of all collectors in the next period based on the machine learning model if the weights of all collectors in the current period are the same, and sets the collector with the highest weight in the next period as a target collector;
further, the collector selecting module comprises:
the characteristic extraction unit is used for carrying out data preprocessing and characteristic extraction operation on the performance index parameters and the load parameters to obtain characteristic data;
and the weight prediction unit is used for converting the characteristic data into a bag-of-words model, comparing the words in the bag-of-words model with a preset word stem in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm, and determining the next period weight of the collector according to a comparison result.
Further, the feature extraction unit is configured to remove punctuation marks in the performance index parameters and load parameters to obtain a parameter text; to intercept text word stems in the parameter text and store them in a target document; to add preset word stems to the target document so as to expand the number of word stems in the target document; and to set all word stems in the target document as the feature data.
Further, the weight prediction unit is used for comparing the words in the bag-of-words model with preset word stems in a database through a multi-concurrency mode and a search model algorithm in a hidden layer of the machine learning model based on the length, the sequence and the word stem type of the load parameters.
Further, the method also comprises the following steps:
and the preset stem generation module is used for acquiring the historical performance index parameters and the historical load parameters of the collector and generating the preset stems in the database according to the historical performance index parameters and the historical load parameters.
Further, the method also comprises the following steps:
the error rate calculation module is used for calculating the error rate of the prediction result of the machine learning model through matrix multiplication and the derivative of the activation function sigmoid after predicting the weights of the next period of all the collectors based on the machine learning model; and the operation of setting the collector with the highest weight in the next period as the target collector is executed if the error rate is smaller than the preset value.
Since the embodiment of the system part corresponds to the embodiment of the method part, the embodiment of the system part is described with reference to the embodiment of the method part, and is not repeated here.
The present application also provides a storage medium having a computer program stored thereon, which when executed may implement the steps provided by the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided by the foregoing embodiments when calling the computer program in the memory. Of course, the electronic device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A server asset scanning method, comprising:
acquiring performance index parameters and load parameters of all collectors;
calculating the current period weight of each collector according to the performance index parameters and the load parameters;
and setting the collector with the highest weight in the current period as a target collector, and scanning assets of the server by using the target collector.
2. The server asset scanning method according to claim 1, further comprising, after calculating the current period weight of each collector according to the performance metric parameter and the load parameter:
judging whether the current period weights of all the collectors are the same or not;
if yes, predicting the next period weights of all collectors based on the machine learning model, and setting the collector with the highest next period weight as a target collector;
and if not, executing the operation of setting the collector with the highest current period weight as the target collector.
3. The server asset scanning method of claim 2, wherein predicting next cycle weights for all collectors based on the machine learning model comprises:
performing data preprocessing and feature extraction operations on the performance index parameters and the load parameters to obtain feature data;
and converting the characteristic data into a bag-of-words model, comparing words in the bag-of-words model with preset word stems in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm, and determining the next period weight of the collector according to a comparison result.
4. The server asset scanning method according to claim 3, wherein performing data preprocessing and feature extraction operations on the performance index parameter and the load parameter to obtain feature data comprises:
removing punctuation marks in the performance index parameters and the load parameters to obtain parameter texts;
intercepting a text word stem in the parameter text, and storing the text word stem to a target document;
adding preset word stems in the target document so as to expand the number of the word stems in the target document;
and setting all word stems in the target document as the characteristic data.
5. The server asset scanning method according to claim 3, wherein comparing the words in the bag of words model with the preset stems in the database by a multi-concurrency mode and a search model algorithm in a hidden layer of the machine learning model comprises:
and comparing the words in the word bag model with preset word stems in a database in a hidden layer of the machine learning model through a multi-concurrency mode and a search model algorithm based on the length, the sequence and the word stem type of the load parameters.
6. The server asset scanning method of claim 3, further comprising:
and acquiring historical performance index parameters and historical load parameters of the collector, and generating a preset word stem in the database according to the historical performance index parameters and the historical load parameters.
7. The server asset scanning method of claim 2, further comprising, after predicting the next cycle weights for all collectors based on the machine learning model:
calculating the error rate of the prediction result of the machine learning model through matrix multiplication and the derivative of an activation function sigmoid;
and if the error rate is less than the preset value, executing the operation of setting the collector with the highest weight in the next period as the target collector.
8. A server asset scanning system, comprising:
the parameter acquisition module is used for acquiring performance index parameters and load parameters of all the collectors;
the weight calculation module is used for calculating the current period weight of each collector according to the performance index parameters and the load parameters;
and the asset scanning module is used for setting the collector with the highest current period weight as a target collector and scanning assets of the server by using the target collector.
9. An electronic device comprising a memory having a computer program stored therein and a processor that implements the steps of the server asset scanning method of any of claims 1 to 7 when the processor invokes the computer program in the memory.
10. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, carry out the steps of a server asset scanning method according to any one of claims 1 to 7.
CN202011357445.7A 2020-11-27 2020-11-27 Server asset scanning method, system, electronic equipment and storage medium Active CN112463378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011357445.7A CN112463378B (en) 2020-11-27 2020-11-27 Server asset scanning method, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112463378A 2021-03-09
CN112463378B 2023-12-22

Family

ID=74809078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011357445.7A Active CN112463378B (en) 2020-11-27 2020-11-27 Server asset scanning method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112463378B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050267929A1 (en) * 2004-06-01 2005-12-01 Hitachi, Ltd. Method of dynamically balancing workload of a storage system
US20070061266A1 (en) * 2005-02-01 2007-03-15 Moore James F Security systems and methods for use with structured and unstructured data
CN102571854A (en) * 2010-12-17 2012-07-11 北大方正集团有限公司 Network data acquisition method and device
CN105187512A (en) * 2015-08-13 2015-12-23 航天恒星科技有限公司 Method and system for load balancing of virtual machine clusters
WO2017011818A1 (en) * 2015-07-16 2017-01-19 Blast Motion Inc. Sensor and media event detection and tagging system
CN108234170A (en) * 2016-12-15 2018-06-29 北京神州泰岳软件股份有限公司 The monitoring method and device of a kind of server cluster
CN108449394A (en) * 2018-03-05 2018-08-24 北京华夏电通科技有限公司 A kind of dispatching method of data file, dispatch server and storage medium
WO2019210820A1 (en) * 2018-05-03 2019-11-07 华为技术有限公司 Information output method and apparatus
WO2020024186A1 (en) * 2018-08-01 2020-02-06 西门子(中国)有限公司 Distributed data acquisition system and method
CN111104220A (en) * 2019-12-06 2020-05-05 北京浪潮数据技术有限公司 Arm architecture-based server configuration method, system and related equipment
CN111626359A (en) * 2020-05-27 2020-09-04 上海船舶研究设计院(中国船舶工业集团公司第六0四研究院) Data fusion method and device, control terminal and ship

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821421A (en) * 2021-08-27 2021-12-21 济南浪潮数据技术有限公司 Server performance data acquisition method, system, device and storage medium
CN113821421B (en) * 2021-08-27 2023-12-22 济南浪潮数据技术有限公司 Method, system, device and storage medium for collecting server performance data

Also Published As

Publication number Publication date
CN112463378B (en) 2023-12-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant