CN111176834A - Automatic scaling strategy operation and maintenance method, system and readable storage medium - Google Patents


Info

Publication number
CN111176834A
Authority
CN
China
Prior art keywords
module
information
carrying capacity
functional module
function
Prior art date
Legal status
Pending
Application number
CN201911250556.5A
Other languages
Chinese (zh)
Inventor
罗柏发
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201911250556.5A
Publication of CN111176834A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 — Partitioning or combining of resources
    • G06F 9/5077 — Logical partitioning of resources; Management or configuration of virtualized resources
    • G06F 9/5083 — Techniques for rebalancing the load in a distributed system

Abstract

The invention relates to the field of system framework research and development, and discloses an automatic scaling strategy operation and maintenance method, system and readable storage medium. The method comprises the following steps: acquiring function information, function module information and load information of a system; acquiring an influence factor of each function according to the function information; calculating the carrying capacity of the corresponding functional module according to the influence factors and the load information; judging whether the carrying capacity meets a preset change condition; and if so, expanding or contracting the nodes of the functional module. The carrying capacity of a functional module is calculated from the module's influence factor and load information, and the module is adjusted dynamically according to that capacity, so resources are utilized more fully. The utilization rates of the module's hardware (CPU, memory, disk, etc.) are combined through different weighting factors into a carrying capacity value that reflects the module's load more directly, enabling timely adjustment.

Description

Automatic scaling strategy operation and maintenance method, system and readable storage medium
Technical Field
The invention relates to the field of development of computer system frameworks, in particular to an automatic scaling strategy operation and maintenance method, system and readable storage medium.
Background
Docker is an open-source application container engine that lets developers package an application and its dependencies into a portable container and distribute it to any popular Linux machine, providing virtualization. Containers are fully sandboxed, have no interfaces to each other, incur little performance overhead, and run easily on ordinary machines and in data centers. As the most popular container-level virtualization technology today, Docker is lightweight, flexible and fast to start, which makes it naturally suited to implementing system elasticity; many data centers already achieve automatic expansion and contraction by deploying services on Docker.
At present, operation and maintenance scaling for physical machines and Docker basically comes in only two forms: vertical scaling and horizontal scaling. Neither can form an operation and maintenance strategy dynamically from the actual usage of the machines, and it is difficult to balance cluster scalability, resource utilization and reliability during management and scheduling. Faced with today's growing variety of services and huge bursts of traffic, both task scheduling methods are insufficiently elastic and cannot adjust dynamically to request volume and load.
Disclosure of Invention
In order to solve at least one technical problem, the invention provides an automatic scaling strategy operation and maintenance method, which comprises the following steps:
acquiring function information, function module information and load information of a system;
acquiring an influence factor of each function according to the function information;
calculating the carrying capacity of the function module corresponding to each function according to that function's influence factor and the acquired load information of the system, wherein different function modules are set up for different functions and each function module comprises a physical server and/or a node module implemented by a virtual machine;
judging whether the carrying capacity meets a preset change condition;
if the condition is met, expanding the nodes of the functional module when it is overloaded, and contracting the nodes when the module holds redundant, wasted resources.
In this embodiment, the step of calculating the carrying capacity of the corresponding functional module according to the impact factor and the load information includes:
the load information comprises the utilization rates of the CPU, the memory and the disk of the functional module, and the utilization rates of the CPU, the memory and the disk of the functional module are obtained;
respectively acquiring weighting factors of the utilization rates of a CPU, a memory and a disk;
calculating the carrying capacity of the function module by the following formula:
S = w × (w1·C + w2·R + w3·D)
wherein S is the carrying capacity value of the function module, w is the influence factor of the function module, w1 is the weighting factor of CPU usage, w2 is the weighting factor of memory usage, and w3 is the weighting factor of disk usage; C is the CPU usage, R is the memory usage, and D is the disk usage, where w1 + w2 + w3 = 1.
In this embodiment, the preset change condition is that the carrying capacity is either larger than an early warning threshold or smaller than a redundancy threshold: when the carrying capacity is larger than the early warning threshold, the nodes of the functional module are expanded; when it is smaller than the redundancy threshold, the nodes are contracted.
In this embodiment, the method further includes:
receiving a module start or stop request;
and starting or stopping the image of the module according to the start or stop request, wherein a different image is built for each functional module, a corresponding docker container is configured in each image, and the functional module runs in its docker container.
In this embodiment, the method further includes:
acquiring the running number information of the functional modules;
judging whether the quantity information exceeds a preset quantity range or not;
and if the number is larger than the maximum value of the number range, stopping the operation of the functional module and sending alarm information to the background.
In the scheme, the step of performing node amplification or contraction on the functional module comprises the following steps:
acquiring the step length of the expansion or contraction of the functional module;
according to the step length, amplifying or shrinking the corresponding functional module;
calculating the carrying capacity of the changed function module;
judging whether the carrying capacity of the changed functional module is in a preset change condition or not;
if the change condition is still met, continuing to expand or contract by the step length; if it is no longer met, stopping the adjustment.
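The step-length loop above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: `capacity_fn` stands in for recomputing the carrying capacity after each change, the toy capacity model in the example is hypothetical, and the 30%/90% bounds follow the preferred thresholds stated elsewhere in the description.

```python
def adjust_module(node_count, step, capacity_fn, warn=0.90, redundancy=0.30,
                  min_nodes=1, max_nodes=64):
    """Repeatedly expand or contract a module by `step` nodes until its
    recalculated carrying capacity is back inside the normal range
    (or a node-count limit is reached)."""
    while True:
        s = capacity_fn(node_count)
        if s > warn and node_count + step <= max_nodes:
            node_count += step          # overloaded: expand nodes
        elif s < redundancy and node_count - step >= min_nodes:
            node_count -= step          # redundant resources: contract nodes
        else:
            break                       # inside the normal range, stop
    return node_count

# Toy capacity model: a fixed total load spread over the running nodes,
# so per-node carrying capacity falls as nodes are added.
expanded = adjust_module(1, 1, lambda n: 1.6 / n)   # starts overloaded
contracted = adjust_module(8, 1, lambda n: 0.8 / n) # starts redundant
```

With a monotone capacity model the loop terminates as soon as the capacity re-enters the 30%–90% band.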
In this scheme, before acquiring function information, function module information, and load information of the system, the method further includes:
dividing an original system into different subsystems according to functions;
building an image file for each subsystem;
deploying a system on an application container engine;
and receiving a selection instruction of a user, and loading system resources preset in the public cloud platform into the target system.
In this embodiment, after performing node expansion or node contraction on the functional module, the method further includes:
establishing a big data platform, collecting daily report information of fleets nationwide and/or by region, and obtaining corresponding reports through big-data comparison, analysis and modeling;
establishing a big-data vehicle-mounted monitoring system capable of transmitting fleet safety information, analyzing existing dangers through the big data platform, and reminding the fleet in time;
establishing a motorcade safety report storage platform and setting a downloading interface;
and establishing a front-end friendly report display area, and displaying the report data in different areas according to different characteristics.
The second aspect of the present invention further provides an automatic scaling policy operation and maintenance system, comprising a memory and a processor, wherein the memory stores an automatic scaling policy operation and maintenance program which, when executed by the processor, implements the steps of the automatic scaling policy operation and maintenance method described above.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes an automatic scaling policy operation and maintenance program, and when the automatic scaling policy operation and maintenance program is executed by a processor, the method implements the steps of the automatic scaling policy operation and maintenance method described in any one of the above.
According to the automatic scaling strategy operation and maintenance method, system and readable storage medium of the invention, the carrying capacity of a functional module is calculated from the module's influence factor and load information, and the module is adjusted dynamically according to that capacity, so resources are utilized more fully. The utilization rates of the module's hardware (CPU, memory, disk, etc.) are combined through different weighting factors into a carrying capacity value that reflects the module's load more directly, enabling timely adjustment. The different modules of the invention communicate through middleware, achieving loose coupling.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart illustrating an automatic scaling strategy operation and maintenance method of the present invention;
FIG. 2 is a flow chart of a method for establishing a cloud computing platform based system architecture according to the present invention;
FIG. 3 is a flow chart illustrating the present invention determining that a predetermined modification condition is satisfied;
FIG. 4 shows a flow chart of the functional module expansion or contraction of the present invention;
FIG. 5 illustrates a flow chart of a method of analyzing fleet safety reports based on big data in accordance with the present invention;
FIG. 6 is a block diagram illustrating the auto scaling policy operation and maintenance system of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
FIG. 1 shows a flow chart of the automatic scaling strategy operation and maintenance method of the present invention.
As shown in fig. 1, the present invention discloses an automatic scaling strategy operation and maintenance method, which comprises:
s102, acquiring function information, function module information and load information of a system;
s104, acquiring the influence factor of each function according to the function information;
s106, calculating the carrying capacity of the corresponding functional module according to the influence factors and the load information;
s108, judging whether the carrying capacity meets a preset change condition;
and S110, if so, performing node expansion or contraction on the functional module.
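The S102–S110 flow can be sketched as a single autoscaling pass. This is a minimal illustrative sketch, not the patent's implementation: the 0.4/0.4/0.2 weighting factors are the preferred values given later in the description, and `get_load`, `decide` and `scale` are hypothetical callbacks standing in for load collection, the change-condition check, and node expansion/contraction.

```python
def autoscale_once(modules, get_load, influence, decide, scale):
    """One pass of the S102-S110 flow: gather load information for each
    function module, compute its carrying capacity, and expand or
    contract the module's nodes when the change condition is met."""
    for name in modules:
        c, r, d = get_load(name)               # S102: CPU/memory/disk load
        w = influence[name]                    # S104: module influence factor
        s = w * (0.4 * c + 0.4 * r + 0.2 * d)  # S106: carrying capacity
        action = decide(s)                     # S108: preset change condition
        if action != "hold":
            scale(name, action)                # S110: expand or contract nodes

# Toy run with stubbed-out load collection and scaling.
loads = {"bigdata": (0.98, 0.95, 0.90), "search": (0.10, 0.10, 0.10)}
actions = {}
autoscale_once(
    loads,
    loads.get,
    {"bigdata": 1.0, "search": 1.0},
    lambda s: "expand" if s > 0.9 else ("contract" if s < 0.3 else "hold"),
    lambda name, action: actions.__setitem__(name, action),
)
```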
It should be noted that, in the present invention, different function modules are set up for different functions. The function modules communicate through middleware, which may be a scheduling unit or a background operation server connected to each module, implementing data transmission and reception and dynamic scheduling of resources. Each functional module may be a physical server or a node module implemented by a virtual machine: when physical servers are adopted, each server runs and processes its own module's functions; when virtual machine nodes are employed, multiple nodes may be deployed on one or more physical servers, each node corresponding to one module. Because virtual machine module nodes schedule tasks in software, the system can be expanded simply by adding general-purpose servers, and virtualization can even raise the system's resource utilization.
Specifically, the invention first obtains the function information, function module information and load information of the system. The function information describes the functions the system can implement, for example big data processing, cloud computing and search query. The function module information describes the function module nodes already present in the system; preferably these are virtual machine nodes, and the acquired information includes each node's intended functions and computing capability. For example, the function of a big-data-processing virtual machine module is to collect data for analysis and processing and to summarize rules and trends; its computing requirements are higher, so more resources must be allocated to it. The load information is information that reflects module load, such as the CPU, memory and disk usage of the functional module. After this information is acquired, the carrying capacity of the corresponding functional module is calculated from the influence factor and the load information. The carrying capacity is the module's current processing load: the larger it is, the closer the module is to saturation; the smaller it is, the more redundant resources exist. The calculated carrying capacity is then checked against a preset change condition, which may change dynamically with real-time physical resources or be preset by a worker.
The change condition is a set range of carrying-capacity values, bounded by a minimum and a maximum. When the carrying capacity lies inside the range, the function module is operating normally and no change is needed; when it falls above or below the range, the module may be overloaded or may be wasting redundant resources, and must be adjusted dynamically. Preferably the normal range is 30%-90%: a carrying capacity below 30% indicates redundant, wasted resources in the functional module, and a carrying capacity above 90% indicates the module is overloaded.
As shown in fig. 2, before acquiring the function information, the function module information, and the load information of the system, the method further includes:
s202, dividing an original system into different subsystems according to functions;
s204, mirror image subsystem files are formulated for each subsystem;
s206, deploying the system on the application container engine;
and S208, receiving a selection instruction of a user, and loading system resources preset in the public cloud platform into the target system.
It is understood that a system can be divided by function into: the system comprises a processor management subsystem, a job management subsystem, a memory management subsystem, a device management subsystem, a file management subsystem, a network security management subsystem and the like.
It should be noted that, taking Docker as an example, Docker uses a client-server (C/S) architecture and manages and creates Docker containers through a remote API. A Docker container is created from a Docker image; the relationship of container to image is similar to that of object to class in object-oriented programming.
In this architecture the Docker daemon acts as the server, accepting requests from clients and processing them (creating, running and distributing containers). The client and server can run on the same machine and communicate through a socket or a RESTful API. The Docker daemon generally runs in the background of the host, waiting to receive messages from clients, while the Docker client provides the user with a series of executable commands through which the user interacts with the daemon.
It should be noted that the user may choose to deploy all or part of the subsystems on the application container engine.
According to an embodiment of the invention, deploying the system on the application container engine further comprises the steps of:
step one, acquiring a target script program, and calling a target file based on the target script program, wherein the target file comprises: the image files of the subsystems are used for constructing a first configuration file of the Docker container, a second configuration file for initializing the Docker container, a configuration file of a service component of the target system and a node configuration file of the target system.
Firstly, obtaining a target script program, calling a target file based on the target script program, then determining an operating system of the target system, a target Docker container and a blueprint template of the target system based on the target file, and finally deploying service components corresponding to configuration files of the target system, the target Docker container and the service components on the target data platform according to the blueprint template to obtain a target big data platform.
The calling the target file based on the target script program specifically includes: calling the mirror image file of the subsystem based on a first subprogram in the target script program; calling the first configuration file and the second configuration file based on a second subprogram in the target script program; and calling the configuration file of the service component and the node configuration file based on a third subprogram in the target script program.
The first subprogram calls the image file (Dockerfile) of each subsystem completed by the user in step S2; these image files establish the operating system of the target system and serve as basic components for deploying the target system in Docker. For example, the target system may be a CentOS 6.8 system, and the basic components include the sshd service component, ssl component, ambari-server component, ambari-agent component, and the like.
The second subprogram calls the first configuration file, written by the user to construct the Docker container, and the second configuration file, used to initialize it. The first configuration file includes configuration information such as the IP network-segment addresses on the big data platform, port-mapping relationships, CPU configuration and memory allocation. The second configuration file includes the configuration information for initializing and configuring the Docker container.
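As a concrete illustration of what the two configuration files might contain, here is a minimal sketch modeling their contents as Python dicts. All field names and values are assumptions for illustration only; the patent does not specify a file format.

```python
# Illustrative contents of the two configuration files described above.
build_config = {            # first file: constructs the Docker container
    "ip_segment": "172.18.0.0/16",                   # IP network segment
    "port_mappings": [{"host": 8080, "container": 80}],
    "cpu_shares": 512,                               # CPU configuration
    "memory_mb": 2048,                               # memory allocation
}
init_config = {             # second file: initializes the Docker container
    "hostname": "node-1",
    "mounts": ["/data/host_dir:/opt/app"],           # host-directory mount
    "env": {"ROLE": "datanode"},
}
```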
The third subprogram calls the configuration file of the service components of the target system and the node configuration file of the target system, both written by personnel. The configuration file of the service components includes configuration information of components such as the Datanode, Namenode, Zookeeper and Spark components. The node configuration file includes configuration information of each Docker node, component node and host node of the target system in the Docker environment.
And secondly, determining a target Docker container and a blueprint template of the target system based on the target file, wherein the target Docker container is a Docker container which is initialized and configured.
The second step specifically further comprises: determining the target Docker container based on the first configuration file and the second configuration file; determining the blueprint template based on the configuration file of the service component and the node configuration file.
A host-machine directory is determined from the subsystem image files acquired by the first subprogram. Because a single subsystem image file can serve several service components, this avoids the problem in conventional deployment methods, where an image file had to be written for every service component, of the set of subsystem images becoming too large.
A Docker container with its initialization configuration completed is then constructed from the first configuration file (for building the container) and the second configuration file (for initializing it) acquired by the second subprogram; this container is mounted on a host-machine directory in the operating system.
A user can write the first configuration file according to the actual situation, so that the Docker container's IP can be controlled and modified, and is managed automatically to prevent network-segment conflicts.
A blueprint template of the target system is constructed from the configuration file of the target system's service components and the node configuration file of the big data platform to be deployed, both acquired by the third subprogram.
The user can then expand and contract the target system automatically according to a chosen strategy; for example, the user can write the service-component configuration file according to the actual situation, achieving dynamic capacity expansion of the target system.
And step three, deploying the service components corresponding to the subsystems, the target Docker container and the configuration files of the service components according to the blueprint template to obtain the target system.
Determining the target Docker container based on the first configuration file and the second configuration file specifically includes: determining a Docker container based on the first configuration file; and performing initialization configuration on the Docker container based on the second configuration file to obtain the target Docker container.
Under the control of the target script program, the Docker containers required by the target system are determined from the configuration information in the first configuration file; there may be one or more containers, and since the user writes the first configuration file according to the actual situation, that file determines the specific number of containers. The containers are then initialized from the second configuration file, again under the control of the target script program, yielding Docker containers with their initialization configuration completed.
According to the embodiment of the invention, the system resources in the public cloud platform can be stored in advance by a supplier or uploaded by each user with authority. The local user can select the required system resource in the public cloud platform according to actual needs, and the system resource is loaded into the target system, so that resource sharing is realized.
According to the embodiment of the present invention, the calculating the carrying capacity of the corresponding functional module according to the impact factor and the load information specifically includes:
acquiring the utilization rates of a CPU, a memory and a disk of a functional module, wherein the load information comprises the utilization rates of the CPU, the memory and the disk of the functional module;
respectively acquiring weighting factors of the utilization rates of a CPU, a memory and a disk;
calculating the carrying capacity of the function module by the following formula:
S = w × (w1·C + w2·R + w3·D)
wherein S is the carrying capacity value of the function module, w is the influence factor of the function module, w1 is the weighting factor of CPU usage, w2 is the weighting factor of memory usage, and w3 is the weighting factor of disk usage; C is the CPU usage, R is the memory usage, and D is the disk usage, where w1 + w2 + w3 = 1.
It should be noted that the load information of the function module is obtained, including its CPU, memory and disk usage. The CPU, memory and disk usage rates correspond to different weighting factors, and the functional module additionally has an influence factor; once the usage rates, weighting factors and influence factor are obtained, the above formula is applied, with w1 + w2 + w3 = 1. Preferably, w1 is 0.4, w2 is 0.4 and w3 is 0.2. The influence factor of a functional module represents the importance of each module and can be regarded as an expression of priority. Different function modules have different influence factors; for example, the influence factor of the big data processing module is 0.4, that of the cloud computing module is 0.3, and that of the search query module is 0.6. Technicians can set each module's influence factor according to actual needs, and the influence factors of the modules need not sum to 1, so that when a function is added the influence factors of the other modules need not change.
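Using the formula and the preferred values above (w1 = 0.4, w2 = 0.4, w3 = 0.2, and an influence factor of 0.4 for the big data processing module), the carrying capacity computation can be sketched as follows; the usage figures in the example are illustrative.

```python
def carrying_capacity(w, c, r, d, w1=0.4, w2=0.4, w3=0.2):
    """S = w * (w1*C + w2*R + w3*D), where the weighting factors of
    CPU, memory and disk usage must sum to 1."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w * (w1 * c + w2 * r + w3 * d)

# Example: the big data processing module (influence factor 0.4) at
# 80% CPU, 70% memory and 50% disk usage.
s = carrying_capacity(0.4, 0.80, 0.70, 0.50)   # 0.4 * 0.70 = 0.28
```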
Fig. 3 is a flow chart illustrating the present invention determining that a preset change condition is satisfied.
As shown in fig. 3, according to the embodiment of the present invention, determining whether the carrying capacity meets a preset change condition, and after the preset change condition is met, the node change of the functional module includes:
s302, judging whether the carrying capacity is larger than an early warning threshold value;
s304, if the number of the nodes is larger than the early warning threshold value, performing node amplification on the functional module;
s306, judging whether the carrying capacity is smaller than a redundancy threshold value;
and S308, if the value is smaller than the redundancy threshold value, performing node contraction on the functional module.
It should be noted that the modification condition is a set range value of the carrying capacity, which is composed of a minimum value and a maximum value, and when the carrying capacity value is within the range value, it indicates that the function module is operating normally, and may not be changed; when the carrying capacity value exceeds or falls outside the range value, it indicates that the function module may be overloaded or wasted in resource redundancy, and needs to be dynamically adjusted. The maximum value is the early warning threshold value, and exceeding the early warning threshold value indicates that the functional module is close to saturation, and if the carrying capacity is continuously increased, the functional module may be down and cannot continuously process the service. The minimum value is a redundancy threshold value, and if the minimum value is smaller than the redundancy threshold value, the functional module has a large amount of resources which are not utilized at the moment, so that the situation of resource redundancy exists, and the resource waste is caused. Therefore, whether the carrying capacity is larger than an early warning threshold value is judged; and if the number of the nodes is larger than the early warning threshold value, performing node amplification on the functional module. Judging whether the carrying capacity is smaller than a redundancy threshold value; and if the node is smaller than the redundancy threshold, performing node contraction on the functional module. By judging the carrying capacity, the resources of the functional module can be dynamically adjusted, and the resources can be utilized to the maximum extent.
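The S302–S308 judgment can be sketched as a small decision function. The 90% and 30% defaults follow the preferred early warning and redundancy thresholds stated earlier in the description; the function and action names are illustrative.

```python
def scaling_decision(capacity, warn=0.90, redundancy=0.30):
    """Map a carrying-capacity value to a scaling action:
    above the early warning threshold -> expand the module's nodes;
    below the redundancy threshold -> contract them;
    otherwise the module is operating normally."""
    if capacity > warn:
        return "expand"
    if capacity < redundancy:
        return "contract"
    return "hold"
```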
According to the embodiment of the invention, the method further comprises the following steps:
receiving a module start or stop request;
starting or stopping the mirror image of the module according to the start or stop request, wherein,
different mirror images are established for different functional modules;
a corresponding docker container is configured in each mirror image;
and the functional module runs in its docker container.
It should be noted that the functional modules of the present invention employ virtual machine nodes: a different mirror image is installed on each virtual machine node, each mirror image is provided with a corresponding docker container, and the functional module runs in that container. Using docker containers makes functions easy to replace and change, and containers are easy to delete and add. When a functional module is to be started, the resource scheduling middleware receives the module start or stop request and starts or stops the module's mirror image accordingly. After the mirror image starts, the docker container is loaded to bring the functional module into operation. Because the whole process is realized by software scheduling, it is quicker and more convenient, the resource scheduling time is shorter, and the efficiency is higher.
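As a rough illustration of this mirror-image and container scheme, a scheduler could drive the standard docker CLI as below. The module and image names are hypothetical, and the patent's resource scheduling middleware is not specified at this level of detail:

```python
import subprocess

def module_command(action, module_name, image):
    """Build the docker CLI command for a module start or stop request.

    Each functional module has its own mirror image; starting the module
    runs that image as a detached container named after the module, and
    stopping it stops that container.
    """
    if action == "start":
        return ["docker", "run", "-d", "--name", module_name, image]
    if action == "stop":
        return ["docker", "stop", module_name]
    raise ValueError(f"unknown action: {action}")

def handle_request(action, module_name, image):
    """Execute a start/stop request (requires a running docker daemon)."""
    subprocess.run(module_command(action, module_name, image), check=True)
```

`module_command` is split out so the command construction can be tested without a docker daemon present.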
According to the embodiment of the invention, the method further comprises the following steps:
acquiring the running number information of the functional modules;
judging whether the quantity information exceeds a preset quantity range or not;
and if the number is larger than the maximum value of the number range, stopping the operation of the functional module and sending alarm information to the background.
It should be noted that the present invention may also limit the number of functional modules in operation: too many functional modules can overload the physical machine's resources, which is unfavorable for service processing, so their number must be limited to keep them in a stable operating state. First, the running quantity information of the functional modules is obtained; the number of modules can be determined from the number of loaded mirror images or loaded docker containers. It is then judged whether the quantity exceeds a preset quantity range, and if it exceeds the maximum of the range, the functional module is stopped and alarm information is sent to the background. The preset quantity range is bounded by the maximum number of modules the physical machine's resources can run; preferably, the range is 85-95% of that maximum, leaving enough time to expand the physical resources.
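A minimal sketch of this quantity check, assuming the 85-95% band mentioned above (the function and return-value names are illustrative):

```python
def check_module_count(running_count, max_modules, band=(0.85, 0.95)):
    """Compare the number of running modules against the preset range.

    The range tops out below the physical machine's absolute maximum so
    that there is still time to expand physical resources before overload.
    """
    low = round(max_modules * band[0])
    high = round(max_modules * band[1])
    if running_count > high:
        return "stop-and-alarm"   # stop the module, alert the background
    if running_count < low:
        return "ok"               # ample headroom
    return "near-limit"           # inside the 85-95% band: watch closely
```

With a physical maximum of 100 modules, 96 running modules exceed the range and trigger the alarm, while 90 sit inside the band.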
FIG. 4 shows a flow chart of the functional module expansion or contraction of the present invention.
As shown in fig. 4, according to the embodiment of the present invention, the node expansion or contraction performed on the functional module specifically includes:
S402, acquiring the step length for expanding or contracting the functional module;
S404, expanding or contracting the corresponding functional module by the step length;
S406, calculating the carrying capacity of the changed functional module;
S408, judging whether the carrying capacity of the changed functional module still meets the preset change condition;
S410, if it still meets the change condition, continuing to expand or contract by the step length; if it no longer meets the condition, stopping the operation.
It should be noted that, to avoid resource waste caused by large-scale expansion or contraction, the present invention also sets a change step length. The step length is the size of the unit resource to expand by, so expansion and contraction proceed in quantized amounts. When the functional module is changed, the step length is first obtained; it can be set by a technician according to actual needs or adjusted dynamically by the background. The corresponding functional module is then expanded or contracted by the step length, and the carrying capacity of the changed module is calculated. If the carrying capacity still meets the preset change condition, expansion or contraction continues by the step length; otherwise the operation stops. The steps of calculating the carrying capacity and judging the change condition are described in detail above and are not repeated here.
Preferably, the expansion step length can be set to 2 or 3 and the contraction step length to 1. For example, when expanding with step length 2, if demand is not met after one expansion, expansion continues with step length 2. If demand is met but the resources are found to be redundant, they can be reclaimed using the contraction step length of 1. This way of setting the step lengths reaches the expected scale more quickly and allows rapid adjustment.
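The step-length loop of S402-S410, with the preferred expansion step of 2 and contraction step of 1, might look like the following sketch; `capacity_of` stands in for the carrying-capacity calculation, and the threshold values in the example are assumptions:

```python
def adjust_nodes(nodes, capacity_of, warning, redundancy,
                 expand_step=2, shrink_step=1, max_rounds=10):
    """Expand or contract a module's node count in fixed steps.

    capacity_of(n) returns the carrying capacity with n nodes.  While the
    capacity still meets the change condition (i.e. lies outside the
    normal range), keep stepping; once it is back in range, stop.
    """
    for _ in range(max_rounds):
        cap = capacity_of(nodes)
        if cap > warning:
            nodes += expand_step                 # near saturation: grow fast
        elif cap < redundancy:
            nodes = max(1, nodes - shrink_step)  # redundant: shrink gently
        else:
            break                                # back within range: done
    return nodes
```

With a fixed load of 10 spread evenly over the nodes (per-node capacity 10/n) and a normal range of (0.5, 2), two nodes expand in steps of 2 up to six nodes before the capacity falls back inside the range.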
As shown in fig. 5, after performing node expansion or contraction on the functional module, the method further includes:
S502, establishing a big data platform, collecting nationwide and/or regional fleet daily report information, and obtaining corresponding reports through big-data comparison, analysis and modeling;
S504, establishing a big-data vehicle-mounted monitoring system capable of transmitting fleet safety information, analyzing existing dangers through the big data platform, and reminding fleets in time;
S506, establishing a fleet safety report storage platform and providing an external download interface;
and S508, establishing a friendly front-end report display area, displaying the report data in different areas according to their different characteristics.
After the big data platform is established, the fleet safety report data are transmitted to it as input data; the report generation module screens and analyzes the data, generates reports, and issues early warnings through the early warning module.
The report generation module REPORT provides an external initialization interface INIT, whose parameters include a data storage address and a screening file address. It acquires the input data by requesting the data storage address according to the agreed data request mode, acquires the screening data via the screening file address, and temporarily stores the acquired data for further processing.
The agreed data request mode is a method for avoiding repeated loading of data by the server, and differs from an ordinary network request. The number of files the server can load at once is determined by its performance and is physically limited, so an auxiliary identifier and an agreed data step size must be added to load the file data correctly. One optional approach is: compute the total number of files; compute the optimal per-load file count from the server's optimal performance ratio; compare the two to obtain the optimal solution, i.e. the number of files requested per analysis; then compute the file-address offset from that solution, derive the request address from the offset, and store it for the next load.
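A sketch of the offset bookkeeping this paragraph describes, assuming the optimal per-request file count has already been determined (the function and variable names are illustrative):

```python
def plan_requests(total_files, per_request):
    """Split a large file set into sequential, bounded load requests.

    The server can only load a limited number of files at once, so each
    request carries a file-address offset and the agreed step size; the
    next offset is stored for the following load.
    """
    offset, batches = 0, []
    while offset < total_files:
        count = min(per_request, total_files - offset)
        batches.append((offset, count))   # (file-address offset, files to load)
        offset += count
    return batches
```

For instance, 10 files with an optimal batch size of 4 yield three requests at offsets 0, 4 and 8.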
The screening data are supplied as an external, independent file. In this way the specific screening data are not hard-coded in the supporting code but can be edited and generated with any editor that supports JSON files, giving flexible operability and friendly later maintenance.
The data screening interface DATASCREEN is defined to screen the data and output the screened result; its parameters data and dataScreen accept the input data and the screening data, respectively. The data are traversed, and for each record the number of contained factors and their weight ratio are computed from the screening data; records with low relevance are eliminated according to the size of the weight ratio. A factor is an influence on the output data: one output is usually shaped by several factors together, each with a different influence, so a reasonable weight is defined for every factor, the combined value of all factor weights is taken, and the record's degree of association is judged from that value, which comprehensively reflects the data's actual weighted effect. For example, rainy weather can be taken as a factor in fleet safety: when the tire friction coefficient of the fleet's vehicles is low, the weather strongly affects fleet safety; otherwise its influence is small.
The factor-weight method defines the limited weights of the factors contained in the user-supplied screening data as the highest. For factors not present in the screening data, the data association between the factor and the output result is computed in parallel from the input data during screening, and the weight is adjusted dynamically. A dynamic weight index array records the calculated and adjusted weight of each factor, and these values serve as the screening-factor input for the next calculation.
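The weighted screening performed by DATASCREEN could be sketched as follows; the record layout, weight values and threshold are assumptions made for illustration:

```python
def screen(records, weights, threshold):
    """Keep only records whose combined factor weight reaches a threshold.

    Each record maps factor names to presence flags (1 = the factor is
    present); `weights` is the dynamic weight index for every factor.  A
    record's relevance is the sum of the weights of its present factors,
    and low-relevance records are eliminated.
    """
    kept = []
    for rec in records:
        score = sum(weights.get(f, 0.0) for f, present in rec.items() if present)
        if score >= threshold:
            kept.append(rec)
    return kept
```

For example, with rain weighted 0.4 and low tire friction 0.6, a record showing both factors scores 1.0 and survives a 0.5 threshold, while a record with neither scores 0 and is dropped.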
The data analysis interface DATAANAS is defined to start the analysis of the data: it receives the screened valid data and executes the corresponding strategy scheme through a strategy manager according to the output task. In the invention, an output task represents an independent set of analysis strategy code to be executed. For ease of management, the different analysis strategies are scheduled and maintained by a strategy manager in a global singleton pattern; the manager holds strategies through the abstract parent interface of the strategy class, i.e. it need not know the concrete type of each maintained strategy object, relying on the property that a class object can be upcast to its abstract parent class at runtime.
In the invention, a strategy parent class STRATEGY is defined as the base class of all strategy classes; every subsequent set of strategy classes must inherit it. A common interface INIT is defined in the class as the entry point for strategy code; it is the common calling method used by the strategy manager, with the concrete implementation supplied by each specific strategy class. In this way different strategies can be extended conveniently without affecting the others.
In the invention, the final report file is generated by the DATAREPORT module and stored at the agreed address on the file server; at the same time a file index list is generated that stores the server location of each file, classified by region and by month, which makes targeted retrieval of regional report files convenient. Specifically, when the DATAREPORT module receives a report request, it constructs a file-index-list name from the parameters attached to the request, requests the identically named index file from the server, and loads and parses the complete file index structure, i.e. the storage addresses of the files on the server. Compared with the traditional full-address request mode, this mode offers better data confidentiality: when report data need to be hidden, encrypting only the file index list hides the reports without encrypting them one by one, avoiding a redundant encryption and decryption pass and improving performance.
The invention can detect the fleet's real-time data, analyze the real-time factors, calculate the fleet's potential safety hazard coefficient by combining them with the large-scale calculations of the big data platform, and give the fleet a corresponding real-time warning.
In the invention, a MONITOR module is established to monitor and receive real-time fleet report data, such as fleet speed, road condition, temperature and tire friction coefficient. An interface ACCEPT is defined to receive a monitoring request; it accepts a series of parameters including the fleet name and vehicle serial numbers, starts a new monitoring process, and creates a SOCKET variable _socket in that process for information exchange between the fleet and the big data platform, including receiving information and sending early-warning information.
The MONITOR module calls the estimation interface ESIMATE of the DATAANAS module to evaluate the proportion of the factors in the fleet's current environment: it compares historical danger data containing the same factors, computes the proportion attributable to those factors, and takes an average over massive calculations. If the probability is too large, the fleet is informed of the potential safety hazard through the _socket object. For example, when a fleet is travelling at a reasonably high speed, the big data platform can acquire data on the fleet's current road section, the weather conditions there, and so on; it then retrieves historical accident data containing the same factors, compares the proportion of accidents caused by those factors, and informs the fleet of the probability of a possible safety accident.
In the invention, the various report files produced by big-data operations are stored on a dedicated report storage server, and an interface GETRREPROT is defined for obtaining the required report files externally, so that report users download report files uniformly through this interface instead of passing them between one another. Compared with traditional scattered storage, this offers backup capability and correctness, reducing data loss and human modification of data during report transmission. Through the interface a report user can obtain at once all the externally open data, including regional and nationwide report data, which provides friendly data acquisition for fleet safety reports.
In the invention, a friendly report-data display replaces the traditional untargeted, disordered display. One optional form partitions the front-end interactive interface into a national report-data display, an abnormal-fleet reminder area, and an excellent-fleet ranking display area.
In the invention, a block serves as the display unit, i.e. a data display area, implemented by defining a class BOX whose variable _data stores the data to be rendered. An external data source is bound by calling the BOX instance interface BINDDATA, which accepts a data parameter of any type, assigns its value to the _data variable, and calls the BOX rendering interface RENDER to display the data on the interface.
The RENDER interface traverses all rendering node objects in turn (the various components that display data, such as text display boxes), reads the value of the node's preset variable datasource, which corresponds to a key name of _data, retrieves the corresponding data value from _data by that key name, and finally renders and displays it. For example, if the datasource value of the list items displaying all fleet information is teamInfo, the value of _data[teamInfo] is taken, and each of its items corresponds to a row of the list and is assigned in turn.
In the invention, an image display area is set up for graphical display of the fleet safety report data. When a specific report is selected, the front-end display area obtains the list of available graphics for that report, and when a graphic is selected, the data are read and the graphic is drawn. A class SHADERSHOW is defined, inheriting from the BOX class, for displaying specific graphics.
SHADERSHOW provides a drawing interface for each type of graphic; for example, the histogram-drawing interface DRAWBAR calls the SHADERSHOW member variable _graphic to draw the graphic. The SHADERSHOW interface SETDATA is defined to receive the data required for drawing; after receiving them, SETDATA first formats the data, i.e. arranges them into the agreed layout for drawing, and then calls the corresponding drawing function to draw the graphic. Graphical reports not only make the data more intuitive but also allow direct visual comparison, giving a friendlier display.
SHADERSHOW data formatting is managed by a data-formatting class that provides all the data-formatting interfaces; when a new data layout is needed, the class can be extended with an interface to meet the requirement. This is easy to extend and reuse, and reduces code redundancy. In the present invention, the class DATAFORM is defined to manage all data-orchestration interfaces, accessed through the singleton _instance.
The invention can perform big-data operations on the data to obtain a more accurate fleet safety model; provide downloads of historical report data; visually display the meaning of each report's data and analyze the fleet safety model by region; and at the same time detect fleet safety in real time and give early warnings.
According to the embodiment of the invention, after the node expansion or contraction of the functional module, the method further comprises the following steps:
acquiring data information, and classifying the acquired data information;
and sending the data information after the corresponding classification to the corresponding client.
It should be noted that the functional modules are disposed in the cloud device (big data platform) and serve different clients, so the cloud device can receive data information for different clients. In addition, for one client, the data information can be classified by the different modules within that client. Further, since the modules in the present application are pre-assembled, i.e. each module further comprises at least one module component, the data information can also be classified by the different module components within a module.
Certainly, to facilitate classification, each piece of data information carries the identity information of its client and the format information of the corresponding module in that client.
According to the embodiment of the present invention, the classifying the acquired data information specifically includes:
determining, according to the identity identifier of the data information, the client to which the data information belongs;
determining, according to the format identifier of the data information, the module to which the data information belongs within the client;
and finally determining, according to the group identifier of the data information, the module component to which the data information belongs within the module.
It should be further noted that, after classifying the data information, the cloud device may send it to the corresponding client at a preset time. The cloud device may also temporarily store the classified data information in groups, that is, data information corresponding to the same module component is placed in one group; when it receives a data request from a client, the cloud device sends the group's data information to the client according to the specific content of the request. For example, if a client requests only the data information of a table module, the cloud device sends the data information corresponding to the table module to that client.
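The three-level classification (client, module, module component) amounts to grouped storage; a sketch with hypothetical field names:

```python
def classify(data_items):
    """Group data information by client, then module, then module component.

    Each item carries an identity identifier (client), a format identifier
    (module) and a group identifier (component); items of the same
    component are stored together so a client request can be answered
    group by group.
    """
    groups = {}
    for item in data_items:
        groups.setdefault(item["client_id"], {}) \
              .setdefault(item["module"], {}) \
              .setdefault(item["component"], []).append(item["payload"])
    return groups
```

A lookup like `groups["c1"]["table"]` then yields exactly the data a client requesting only its table module would receive.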
According to the embodiment of the present invention, after sending the data information after the corresponding classification to the corresponding client, the method further includes:
determining module configuration information of an interface;
receiving corresponding data information, and converting the data information into a preset format;
and displaying dynamic interface information according to the module configuration information and the data information after format conversion.
Optionally, the client classifies its modules in advance. Further, since the modules in the present application are pre-assembled, each functional module also comprises different module components. For example, the client includes a text module, a table module, a map module and an icon module. The module configuration information includes the types of modules to display on the current interface, the module components to configure for each displayed module, and the layout of the modules on the interface. Of course, a plurality of interface templates may be preset, each containing at least one module component with a preset layout, and a user may select one interface template for display; after selecting a template, the user can also continue to personalize it. Optionally, the interface template may take the form of an H5 interface.
To facilitate displaying data information on the right module component, each module component is optionally set to match only data information in a preset format. After the client receives data information, it therefore determines the owning module component from the data's format identifier and converts the data into the preset format of that component. For example, files in the text module are unified to DOC format and files in the form module to XLS format.
According to the module configuration information, the client displays each type of module at the corresponding position of the interface, configures each module with the module components to display, and shows the corresponding data information on each module component according to the data's format identifier. The forms in which a module component displays information include but are not limited to: text, tables, and charts (such as line charts, pie charts or bar charts). The interface information may change in real time, i.e. interfaces in different states are displayed according to the received data information.
It should be further explained that the client may send data requests to the cloud device according to a preset interface display flow and display interfaces in different states according to the data received; of course, a user may also send a data request through the client as actually needed to obtain and display the desired data information. The frequency of the client's data requests is determined by actual needs; for example, with a period of 20 milliseconds, if the data received in the current period match those of the previous period, the current interface is kept unchanged, and if they do not match, the interface is refreshed with the latest data. Certainly, to preserve the cloud device's working efficiency and reduce its load, the client cuts off its direct communication with the cloud device immediately after each receipt of data, so that other clients can access the cloud device.
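The periodic refresh-on-change behaviour described here can be sketched as a small polling loop; the fetch and render callbacks are stand-ins for the client's real request and display code, and the period count is arbitrary:

```python
def poll_cycle(fetch, render, last=None, periods=3):
    """Fetch data each period and re-render only when it has changed.

    If the data received this period match the previous period's, the
    current interface is kept; otherwise it is refreshed with the latest
    data.  (In the real client each period would be on the order of 20 ms.)
    """
    for _ in range(periods):
        data = fetch()
        if data != last:          # data changed since the last period
            render(data)          # refresh the interface
            last = data
    return last
```

Feeding it the sequence 1, 1, 2 renders only twice: once for the initial 1 and once when the value changes to 2.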
FIG. 6 is a block diagram of an automatic scaling policy operation and maintenance system according to the present invention.
As shown in fig. 6, the present invention provides an automatic scaling policy operation and maintenance system 6, which includes: a memory 61 and a processor 62, where the memory includes an automatic scaling policy operation and maintenance program, and when the automatic scaling policy operation and maintenance program is executed by the processor, the steps of the automatic scaling policy operation and maintenance method are implemented as described in any one of the above.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes an automatic scaling policy operation and maintenance program, and when the automatic scaling policy operation and maintenance program is executed by a processor, the method implements the steps of the automatic scaling policy operation and maintenance method described in any one of the above.
According to the automatic scaling strategy operation and maintenance method, system and readable storage medium, the carrying capacity of a functional module is calculated from the module's influence factor and load information, and dynamic adjustment is performed according to that capacity, so resources are utilized more fully. The utilization rates of the module's CPU and other hardware are obtained and combined through different weighting factors into a carrying-capacity value that reflects the module's load more directly, enabling timely adjustment. Communication between different modules of the invention passes through middleware, achieving the effect of loose coupling.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An automatic scaling strategy operation and maintenance method is characterized by comprising the following steps:
acquiring function information, function module information and load information of a system;
acquiring an influence factor of each function according to the function information;
calculating the carrying capacity of a function module corresponding to each function according to the influence factor of each function and the obtained load information of the system, and setting different function modules aiming at different functions, wherein the function modules comprise physical servers or/and node modules realized by virtual machines;
judging whether the carrying capacity meets a preset change condition;
if the condition is met, when the functional module is overloaded, the node of the functional module is expanded, and when the resource redundancy of the functional module is wasted, the node of the functional module is contracted.
2. The operation and maintenance method according to claim 1, wherein the step of calculating the carrying capacity of the corresponding functional module according to the influence factor and the load information comprises:
acquiring the CPU, memory, and disk utilization rates of the functional module, the load information comprising these utilization rates;
acquiring the respective weighting factors of the CPU, memory, and disk utilization rates;
calculating the carrying capacity of the functional module by the following formula:
S = w × (w1·C + w2·R + w3·D)
wherein S is the carrying capacity value of the functional module, w is the influence factor of the functional module, w1 is the weighting factor of the CPU utilization rate, w2 is the weighting factor of the memory utilization rate, w3 is the weighting factor of the disk utilization rate, C is the CPU utilization rate, R is the memory utilization rate, and D is the disk utilization rate, where w1 + w2 + w3 = 1.
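Expressed outside the claim language, the weighted formula of claim 2 can be sketched as follows; the function and parameter names are illustrative, not part of the claimed method:

```python
def carrying_capacity(w, w1, w2, w3, cpu, mem, disk):
    """Carrying capacity S = w * (w1*C + w2*R + w3*D).

    w is the per-function influence factor; w1, w2, w3 weight the CPU,
    memory, and disk utilization rates and must sum to 1 per claim 2.
    """
    if abs((w1 + w2 + w3) - 1.0) > 1e-9:
        raise ValueError("weighting factors must satisfy w1 + w2 + w3 = 1")
    return w * (w1 * cpu + w2 * mem + w3 * disk)
```

For example, an influence factor of 1.2 with weights (0.5, 0.3, 0.2) and utilizations (0.8, 0.6, 0.4) yields S = 1.2 × (0.4 + 0.18 + 0.08) = 0.792.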
3. The method according to claim 1, wherein the preset change condition comprises: the carrying capacity is greater than an early-warning threshold or less than a redundancy threshold; when the carrying capacity is greater than the early-warning threshold, node expansion is performed on the functional module; and when the carrying capacity is less than the redundancy threshold, node contraction is performed on the functional module.
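A minimal sketch of the threshold check in claim 3; the threshold parameters and action labels are assumptions for illustration:

```python
def scaling_decision(capacity, warning_threshold, redundancy_threshold):
    """Map a carrying capacity value to a scaling action.

    Above the early-warning threshold the module is overloaded and its
    nodes are expanded; below the redundancy threshold resources are
    wasted and its nodes are contracted; in between, no change is made.
    """
    if capacity > warning_threshold:
        return "expand"
    if capacity < redundancy_threshold:
        return "contract"
    return "hold"
```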
4. The method of claim 1, further comprising:
receiving a module start or stop request;
and starting or stopping the image of the corresponding module according to the start or stop request, wherein a different image is established for each functional module, a corresponding docker container is configured for each image, and the functional modules run in the docker containers.
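The one-image-per-module arrangement of claim 4 might be driven as in the sketch below, which only builds the docker CLI invocation; the module-to-image mapping and container names are hypothetical:

```python
def docker_command(action, module, images):
    """Build the docker CLI invocation that starts or stops a module.

    `images` maps each functional module to its own image name,
    reflecting claim 4's one-image-per-module arrangement. Starting
    runs the module's container detached under the module's name.
    """
    if action == "start":
        return ["docker", "run", "-d", "--name", module, images[module]]
    if action == "stop":
        return ["docker", "stop", module]
    raise ValueError(f"unknown action: {action}")
```

The returned list would typically be handed to a process runner such as `subprocess.run`.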
5. The method of claim 1, further comprising:
acquiring information on the number of running functional modules;
judging whether the number exceeds a preset number range;
and if the number is greater than the maximum value of the number range, stopping the operation of the functional module and sending alarm information to the background.
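The count guard of claim 5 can be sketched as follows; the callback-based alarm and the boolean return value are illustrative assumptions:

```python
def check_module_count(running_count, min_count, max_count, alert):
    """Guard on the number of running functional modules.

    If the count exceeds the preset maximum, send alarm information to
    the background via `alert` and signal that operation should stop.
    Returns True only when the count is within the preset range.
    """
    if running_count > max_count:
        alert(f"module count {running_count} exceeds maximum {max_count}")
        return False
    return min_count <= running_count <= max_count
```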
6. The method according to claim 1, wherein the step of performing node expansion or node contraction on the functional module comprises:
acquiring the step length for expanding or contracting the functional module;
expanding or contracting the corresponding functional module according to the step length;
calculating the carrying capacity of the changed functional module;
judging whether the carrying capacity of the changed functional module still meets the preset change condition;
if it still meets the change condition, continuing to expand or contract by the step length; if it no longer meets the change condition, stopping the operation.
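The iterative step-length loop of claim 6 can be sketched as below. `capacity_of` stands in for the recomputation of carrying capacity after each change, and the `max_rounds` bound is an added safety assumption, not part of the claim:

```python
def step_scale(node_count, step, capacity_of, warning, redundancy, max_rounds=100):
    """Expand or contract a module's node count by a fixed step length.

    capacity_of(node_count) returns the module's carrying capacity at a
    given node count. After each step the capacity is recomputed, and
    scaling continues only while the preset change condition (capacity
    above the early-warning or below the redundancy threshold) holds.
    """
    for _ in range(max_rounds):
        capacity = capacity_of(node_count)
        if capacity > warning:        # overloaded: expand by one step
            node_count += step
        elif capacity < redundancy:   # redundant: contract by one step
            node_count = max(step, node_count - step)
        else:                         # condition no longer met: stop
            break
    return node_count
```

With a toy model where capacity is inversely proportional to node count, `capacity_of = lambda n: 2.0 / n`, scaling from 1 node with thresholds 0.8/0.2 expands until the capacity falls inside the acceptable band.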
7. The method of claim 1, wherein before acquiring the function information, functional module information, and load information of the system, the method further comprises:
dividing the original system into different subsystems according to function;
establishing an image file for each subsystem;
deploying the system on an application container engine;
and receiving a selection instruction of a user, and loading system resources preset in the public cloud platform into the target system.
8. The method of claim 1, wherein after performing node expansion or contraction on the functional module, the method further comprises:
establishing a big data platform, collecting nationwide and/or regional daily fleet report information, and obtaining corresponding reports through big data comparison, analysis, and modeling;
establishing a big data vehicle-mounted monitoring system capable of transmitting fleet safety information, analyzing existing dangers through the big data platform, and reminding the fleet in time;
establishing a fleet safety report storage platform and setting a download interface;
and establishing a user-friendly front-end report display area, displaying the report data in different areas according to different characteristics.
9. An automatic scaling strategy operation and maintenance system, characterized in that the system comprises: a memory and a processor, wherein the memory includes an auto scaling policy operation and maintenance program, and when the auto scaling policy operation and maintenance program is executed by the processor, the steps of the auto scaling policy operation and maintenance method according to any one of claims 1 to 8 are implemented.
10. A computer-readable storage medium, wherein the computer-readable storage medium includes an auto scaling policy operation and maintenance program, and when the auto scaling policy operation and maintenance program is executed by a processor, the steps of the auto scaling policy operation and maintenance method according to any one of claims 1 to 8 are implemented.
CN201911250556.5A 2019-12-09 2019-12-09 Automatic scaling strategy operation and maintenance method, system and readable storage medium Pending CN111176834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911250556.5A CN111176834A (en) 2019-12-09 2019-12-09 Automatic scaling strategy operation and maintenance method, system and readable storage medium

Publications (1)

Publication Number Publication Date
CN111176834A true CN111176834A (en) 2020-05-19

Family

ID=70648753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911250556.5A Pending CN111176834A (en) 2019-12-09 2019-12-09 Automatic scaling strategy operation and maintenance method, system and readable storage medium

Country Status (1)

Country Link
CN (1) CN111176834A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101176A (en) * 2016-05-27 2016-11-09 成都索贝数码科技股份有限公司 The media cloud that melts of a kind of integration produces delivery system and method
CN108829409A (en) * 2018-06-20 2018-11-16 泰华智慧产业集团股份有限公司 A kind of distributed system quick deployment method and system
CN110442453A (en) * 2019-08-01 2019-11-12 佛山普瑞威尔科技有限公司 A kind of automatic telescopic strategy O&M method, system and readable storage medium storing program for executing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328184A (en) * 2020-12-03 2021-02-05 北京联创信安科技股份有限公司 Cluster capacity expansion method, device, equipment and storage medium
CN112328184B (en) * 2020-12-03 2023-11-21 北京联创信安科技股份有限公司 Cluster capacity expansion method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US20080209275A1 (en) Test framework for testing an application
CN112148494B (en) Processing method and device for operator service, intelligent workstation and electronic equipment
CN112199385A (en) Processing method and device for artificial intelligence AI, electronic equipment and storage medium
CN108021400B (en) Data processing method and device, computer storage medium and equipment
CN112202899B (en) Workflow processing method and device, intelligent workstation and electronic equipment
CN112035516B (en) Processing method and device for operator service, intelligent workstation and electronic equipment
CN107908521A (en) A kind of monitoring method of container performance on the server performance and node being applied under cloud environment
US11720825B2 (en) Framework for multi-tenant data science experiments at-scale
CN112069204A (en) Processing method and device for operator service, intelligent workstation and electronic equipment
CN110347375B (en) Resource combination type virtual comprehensive natural environment framework and method for virtual test
CN114791846B (en) Method for realizing observability aiming at cloud-originated chaos engineering experiment
CN111338786A (en) Quota management method and device for cloud platform resources and computer equipment
CN112069205A (en) Processing method and device for business application, intelligent workstation and electronic equipment
CN114706690B (en) Method and system for sharing GPU (graphics processing Unit) by Kubernetes container
US9455865B2 (en) Server virtualization
CN111176834A (en) Automatic scaling strategy operation and maintenance method, system and readable storage medium
CN115392501A (en) Data acquisition method and device, electronic equipment and storage medium
CN113326052A (en) Method and device for upgrading service component, computer equipment and storage medium
JP4430908B2 (en) Multi-window display control device and computer system using the same
CN112785418A (en) Credit risk modeling method, credit risk modeling device, credit risk modeling equipment and computer readable storage medium
CN112100414A (en) Data processing method, device, system and computer readable storage medium
CN112817635B (en) Model processing method and data processing system
US11620290B2 (en) Method and system for performing data cloud operations
CN102902825B (en) A kind of database optimizing method and device
CN116450465B (en) Data processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination